2026-03-08 00:00:07.318710 | Job console starting
2026-03-08 00:00:07.333274 | Updating git repos
2026-03-08 00:00:07.561592 | Cloning repos into workspace
2026-03-08 00:00:07.820646 | Restoring repo states
2026-03-08 00:00:07.853142 | Merging changes
2026-03-08 00:00:07.853164 | Checking out repos
2026-03-08 00:00:08.309395 | Preparing playbooks
2026-03-08 00:00:09.608606 | Running Ansible setup
2026-03-08 00:00:18.659274 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-08 00:00:20.533729 |
2026-03-08 00:00:20.533855 | PLAY [Base pre]
2026-03-08 00:00:20.588354 |
2026-03-08 00:00:20.588480 | TASK [Setup log path fact]
2026-03-08 00:00:20.621428 | orchestrator | ok
2026-03-08 00:00:20.663429 |
2026-03-08 00:00:20.663572 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-08 00:00:20.713025 | orchestrator | ok
2026-03-08 00:00:20.728948 |
2026-03-08 00:00:20.729073 | TASK [emit-job-header : Print job information]
2026-03-08 00:00:20.806379 | # Job Information
2026-03-08 00:00:20.806546 | Ansible Version: 2.16.14
2026-03-08 00:00:20.806582 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-08 00:00:20.806617 | Pipeline: periodic-midnight
2026-03-08 00:00:20.806642 | Executor: 521e9411259a
2026-03-08 00:00:20.806663 | Triggered by: https://github.com/osism/testbed
2026-03-08 00:00:20.806685 | Event ID: 5a084b4300424527a1d97f8c219c9234
2026-03-08 00:00:20.814160 |
2026-03-08 00:00:20.814273 | LOOP [emit-job-header : Print node information]
2026-03-08 00:00:21.076699 | orchestrator | ok:
2026-03-08 00:00:21.077065 | orchestrator | # Node Information
2026-03-08 00:00:21.077120 | orchestrator | Inventory Hostname: orchestrator
2026-03-08 00:00:21.077143 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-08 00:00:21.077163 | orchestrator | Username: zuul-testbed03
2026-03-08 00:00:21.077180 | orchestrator | Distro: Debian 12.13
2026-03-08 00:00:21.077201 | orchestrator | Provider: static-testbed
2026-03-08 00:00:21.077218 | orchestrator | Region:
2026-03-08 00:00:21.077236 | orchestrator | Label: testbed-orchestrator
2026-03-08 00:00:21.077253 | orchestrator | Product Name: OpenStack Nova
2026-03-08 00:00:21.077269 | orchestrator | Interface IP: 81.163.193.140
2026-03-08 00:00:21.119128 |
2026-03-08 00:00:21.119235 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-08 00:00:22.120041 | orchestrator -> localhost | changed
2026-03-08 00:00:22.126442 |
2026-03-08 00:00:22.126530 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-08 00:00:23.980547 | orchestrator -> localhost | changed
2026-03-08 00:00:23.992162 |
2026-03-08 00:00:23.992260 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-08 00:00:24.869484 | orchestrator -> localhost | ok
2026-03-08 00:00:24.875342 |
2026-03-08 00:00:24.875430 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-08 00:00:24.922615 | orchestrator | ok
2026-03-08 00:00:24.957851 | orchestrator | included: /var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-08 00:00:24.977217 |
2026-03-08 00:00:24.977308 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-08 00:00:28.221638 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-08 00:00:28.221805 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/work/c7f1f43b9c6c488abd8fa06041d5207b_id_rsa
2026-03-08 00:00:28.221838 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/work/c7f1f43b9c6c488abd8fa06041d5207b_id_rsa.pub
2026-03-08 00:00:28.221860 | orchestrator -> localhost | The key fingerprint is:
2026-03-08 00:00:28.221883 | orchestrator -> localhost | SHA256:W8HMczxS5tb+sNpiqDzHBss5rzKTeNvNZhGfybezxq8 zuul-build-sshkey
2026-03-08 00:00:28.221903 | orchestrator -> localhost | The key's randomart image is:
2026-03-08 00:00:28.221933 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-08 00:00:28.221952 | orchestrator -> localhost | | o |
2026-03-08 00:00:28.221971 | orchestrator -> localhost | | + = . |
2026-03-08 00:00:28.222015 | orchestrator -> localhost | | B * . |
2026-03-08 00:00:28.222032 | orchestrator -> localhost | | .* o |
2026-03-08 00:00:28.222049 | orchestrator -> localhost | | S .+ oo |
2026-03-08 00:00:28.222069 | orchestrator -> localhost | | .o. = .+ |
2026-03-08 00:00:28.222086 | orchestrator -> localhost | | . o.= o o...|
2026-03-08 00:00:28.222102 | orchestrator -> localhost | | . *o*oB oo= |
2026-03-08 00:00:28.222119 | orchestrator -> localhost | | ..==@+..oE=.|
2026-03-08 00:00:28.222136 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-08 00:00:28.222182 | orchestrator -> localhost | ok: Runtime: 0:00:01.701981
2026-03-08 00:00:28.228216 |
2026-03-08 00:00:28.228298 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-08 00:00:28.285667 | orchestrator | ok
2026-03-08 00:00:28.302160 | orchestrator | included: /var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-08 00:00:28.326814 |
2026-03-08 00:00:28.345237 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-08 00:00:28.391690 | orchestrator | skipping: Conditional result was False
2026-03-08 00:00:28.399078 |
2026-03-08 00:00:28.399186 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-08 00:00:29.616946 | orchestrator | changed
2026-03-08 00:00:29.623474 |
2026-03-08 00:00:29.623563 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-08 00:00:29.910936 | orchestrator | ok
2026-03-08 00:00:29.916174 |
2026-03-08 00:00:29.916265 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-08 00:00:30.336549 | orchestrator | ok
2026-03-08 00:00:30.347358 |
2026-03-08 00:00:30.347451 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-08 00:00:30.862524 | orchestrator | ok
2026-03-08 00:00:30.880668 |
2026-03-08 00:00:30.880760 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-08 00:00:30.947094 | orchestrator | skipping: Conditional result was False
2026-03-08 00:00:30.956278 |
2026-03-08 00:00:30.956365 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-08 00:00:32.019865 | orchestrator -> localhost | changed
2026-03-08 00:00:32.035513 |
2026-03-08 00:00:32.035608 | TASK [add-build-sshkey : Add back temp key]
2026-03-08 00:00:32.803199 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/work/c7f1f43b9c6c488abd8fa06041d5207b_id_rsa (zuul-build-sshkey)
2026-03-08 00:00:32.803427 | orchestrator -> localhost | ok: Runtime: 0:00:00.033652
2026-03-08 00:00:32.812492 |
2026-03-08 00:00:32.812579 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-08 00:00:33.469219 | orchestrator | ok
2026-03-08 00:00:33.477407 |
2026-03-08 00:00:33.477498 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-08 00:00:33.524954 | orchestrator | skipping: Conditional result was False
2026-03-08 00:00:33.604584 |
2026-03-08 00:00:33.604684 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-08 00:00:34.028936 | orchestrator | ok
2026-03-08 00:00:34.046771 |
2026-03-08 00:00:34.054923 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-08 00:00:34.113149 | orchestrator | ok
2026-03-08 00:00:34.120628 |
2026-03-08 00:00:34.120736 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-08 00:00:34.700457 | orchestrator -> localhost | ok
2026-03-08 00:00:34.706721 |
2026-03-08 00:00:34.706815 | TASK [validate-host : Collect information about the host]
2026-03-08 00:00:36.156103 | orchestrator | ok
2026-03-08 00:00:36.185677 |
2026-03-08 00:00:36.185785 | TASK [validate-host : Sanitize hostname]
2026-03-08 00:00:36.297115 | orchestrator | ok
2026-03-08 00:00:36.301855 |
2026-03-08 00:00:36.301940 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-08 00:00:37.862029 | orchestrator -> localhost | changed
2026-03-08 00:00:37.867035 |
2026-03-08 00:00:37.867118 | TASK [validate-host : Collect information about zuul worker]
2026-03-08 00:00:38.678327 | orchestrator | ok
2026-03-08 00:00:38.682575 |
2026-03-08 00:00:38.682663 | TASK [validate-host : Write out all zuul information for each host]
2026-03-08 00:00:40.202508 | orchestrator -> localhost | changed
2026-03-08 00:00:40.211458 |
2026-03-08 00:00:40.211595 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-08 00:00:40.566752 | orchestrator | ok
2026-03-08 00:00:40.571759 |
2026-03-08 00:00:40.571854 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-08 00:02:02.688430 | orchestrator | changed:
2026-03-08 00:02:02.688679 | orchestrator | .d..t...... src/
2026-03-08 00:02:02.688715 | orchestrator | .d..t...... src/github.com/
2026-03-08 00:02:02.688741 | orchestrator | .d..t...... src/github.com/osism/
2026-03-08 00:02:02.688764 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-08 00:02:02.688786 | orchestrator | RedHat.yml
2026-03-08 00:02:02.704000 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-08 00:02:02.704018 | orchestrator | RedHat.yml
2026-03-08 00:02:02.704070 | orchestrator | = 2.2.0"...
2026-03-08 00:02:14.944879 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-08 00:02:14.961472 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-08 00:02:15.109052 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-08 00:02:16.626385 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-08 00:02:16.687439 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-08 00:02:17.166880 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-08 00:02:17.226296 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-08 00:02:17.988696 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-08 00:02:17.988747 | orchestrator |
2026-03-08 00:02:17.988754 | orchestrator | Providers are signed by their developers.
2026-03-08 00:02:17.988759 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-08 00:02:17.988764 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-08 00:02:17.988771 | orchestrator |
2026-03-08 00:02:17.988775 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-08 00:02:17.988786 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-08 00:02:17.988791 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-08 00:02:17.988795 | orchestrator | you run "tofu init" in the future.
2026-03-08 00:02:17.989061 | orchestrator |
2026-03-08 00:02:17.989076 | orchestrator | OpenTofu has been successfully initialized!
2026-03-08 00:02:17.989091 | orchestrator |
2026-03-08 00:02:17.989096 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-08 00:02:17.989100 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-08 00:02:17.989104 | orchestrator | should now work.
2026-03-08 00:02:17.989124 | orchestrator |
2026-03-08 00:02:17.989128 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-08 00:02:17.989132 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-08 00:02:17.989136 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-08 00:02:18.163483 | orchestrator | Created and switched to workspace "ci"!
2026-03-08 00:02:18.163545 | orchestrator |
2026-03-08 00:02:18.163552 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-08 00:02:18.163558 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-08 00:02:18.163565 | orchestrator | for this configuration.
2026-03-08 00:02:18.326107 | orchestrator | ci.auto.tfvars
2026-03-08 00:02:18.849967 | orchestrator | default_custom.tf
2026-03-08 00:02:24.136133 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-08 00:02:24.694581 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-08 00:02:24.878626 | orchestrator |
2026-03-08 00:02:24.878715 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-08 00:02:24.878726 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-08 00:02:24.878732 | orchestrator | + create
2026-03-08 00:02:24.878739 | orchestrator | <= read (data resources)
2026-03-08 00:02:24.878746 | orchestrator |
2026-03-08 00:02:24.878752 | orchestrator | OpenTofu will perform the following actions:
2026-03-08 00:02:24.878766 | orchestrator |
2026-03-08 00:02:24.878772 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-08 00:02:24.878779 | orchestrator | # (config refers to values not yet known)
2026-03-08 00:02:24.878785 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-08 00:02:24.878790 | orchestrator | + checksum = (known after apply)
2026-03-08 00:02:24.878796 | orchestrator | + created_at = (known after apply)
2026-03-08 00:02:24.878802 | orchestrator | + file = (known after apply)
2026-03-08 00:02:24.878807 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.878833 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.878839 | orchestrator | + min_disk_gb = (known after apply)
2026-03-08 00:02:24.878845 | orchestrator | + min_ram_mb = (known after apply)
2026-03-08 00:02:24.878851 | orchestrator | + most_recent = true
2026-03-08 00:02:24.878856 | orchestrator | + name = (known after apply)
2026-03-08 00:02:24.878862 | orchestrator | + protected = (known after apply)
2026-03-08 00:02:24.878867 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.878875 | orchestrator | + schema = (known after apply)
2026-03-08 00:02:24.878881 | orchestrator | + size_bytes = (known after apply)
2026-03-08 00:02:24.878886 | orchestrator | + tags = (known after apply)
2026-03-08 00:02:24.878892 | orchestrator | + updated_at = (known after apply)
2026-03-08 00:02:24.878897 | orchestrator | }
2026-03-08 00:02:24.878903 | orchestrator |
2026-03-08 00:02:24.878908 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-08 00:02:24.878914 | orchestrator | # (config refers to values not yet known)
2026-03-08 00:02:24.878920 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-08 00:02:24.878925 | orchestrator | + checksum = (known after apply)
2026-03-08 00:02:24.878930 | orchestrator | + created_at = (known after apply)
2026-03-08 00:02:24.878936 | orchestrator | + file = (known after apply)
2026-03-08 00:02:24.878941 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.878947 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.878952 | orchestrator | + min_disk_gb = (known after apply)
2026-03-08 00:02:24.878957 | orchestrator | + min_ram_mb = (known after apply)
2026-03-08 00:02:24.878963 | orchestrator | + most_recent = true
2026-03-08 00:02:24.878968 | orchestrator | + name = (known after apply)
2026-03-08 00:02:24.878974 | orchestrator | + protected = (known after apply)
2026-03-08 00:02:24.878979 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.878985 | orchestrator | + schema = (known after apply)
2026-03-08 00:02:24.878990 | orchestrator | + size_bytes = (known after apply)
2026-03-08 00:02:24.878995 | orchestrator | + tags = (known after apply)
2026-03-08 00:02:24.879001 | orchestrator | + updated_at = (known after apply)
2026-03-08 00:02:24.879006 | orchestrator | }
2026-03-08 00:02:24.879014 | orchestrator |
2026-03-08 00:02:24.879019 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-08 00:02:24.879025 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-08 00:02:24.879031 | orchestrator | + content = (known after apply)
2026-03-08 00:02:24.879037 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:24.879042 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:24.879047 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:24.879053 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:24.879058 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:24.879063 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:24.879069 | orchestrator | + directory_permission = "0777"
2026-03-08 00:02:24.879074 | orchestrator | + file_permission = "0644"
2026-03-08 00:02:24.879080 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-08 00:02:24.879085 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879090 | orchestrator | }
2026-03-08 00:02:24.879096 | orchestrator |
2026-03-08 00:02:24.879101 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-08 00:02:24.879106 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-08 00:02:24.879112 | orchestrator | + content = (known after apply)
2026-03-08 00:02:24.879117 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:24.879122 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:24.879128 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:24.879133 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:24.879139 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:24.879155 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:24.879160 | orchestrator | + directory_permission = "0777"
2026-03-08 00:02:24.879166 | orchestrator | + file_permission = "0644"
2026-03-08 00:02:24.879176 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-08 00:02:24.879182 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879187 | orchestrator | }
2026-03-08 00:02:24.879193 | orchestrator |
2026-03-08 00:02:24.879198 | orchestrator | # local_file.inventory will be created
2026-03-08 00:02:24.879204 | orchestrator | + resource "local_file" "inventory" {
2026-03-08 00:02:24.879209 | orchestrator | + content = (known after apply)
2026-03-08 00:02:24.879214 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:24.879220 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:24.879225 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:24.879230 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:24.879236 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:24.879242 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:24.879247 | orchestrator | + directory_permission = "0777"
2026-03-08 00:02:24.879252 | orchestrator | + file_permission = "0644"
2026-03-08 00:02:24.879258 | orchestrator | + filename = "inventory.ci"
2026-03-08 00:02:24.879263 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879268 | orchestrator | }
2026-03-08 00:02:24.879274 | orchestrator |
2026-03-08 00:02:24.879279 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-08 00:02:24.879285 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-08 00:02:24.879290 | orchestrator | + content = (sensitive value)
2026-03-08 00:02:24.879295 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:24.879301 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:24.879306 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:24.879312 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:24.879317 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:24.879322 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:24.879328 | orchestrator | + directory_permission = "0700"
2026-03-08 00:02:24.879333 | orchestrator | + file_permission = "0600"
2026-03-08 00:02:24.879339 | orchestrator | + filename = ".id_rsa.ci"
2026-03-08 00:02:24.879344 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879350 | orchestrator | }
2026-03-08 00:02:24.879355 | orchestrator |
2026-03-08 00:02:24.879360 | orchestrator | # null_resource.node_semaphore will be created
2026-03-08 00:02:24.879366 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-08 00:02:24.879371 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879377 | orchestrator | }
2026-03-08 00:02:24.879385 | orchestrator |
2026-03-08 00:02:24.879391 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-08 00:02:24.879396 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-08 00:02:24.879402 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879407 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879412 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879418 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879423 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879429 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-08 00:02:24.879434 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879440 | orchestrator | + size = 80
2026-03-08 00:02:24.879445 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879450 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879456 | orchestrator | }
2026-03-08 00:02:24.879461 | orchestrator |
2026-03-08 00:02:24.879467 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-08 00:02:24.879472 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:24.879477 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879483 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879488 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879497 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879503 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879508 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-08 00:02:24.879513 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879519 | orchestrator | + size = 80
2026-03-08 00:02:24.879524 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879530 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879535 | orchestrator | }
2026-03-08 00:02:24.879540 | orchestrator |
2026-03-08 00:02:24.879546 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-08 00:02:24.879551 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:24.879557 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879562 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879567 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879573 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879578 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879583 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-08 00:02:24.879589 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879594 | orchestrator | + size = 80
2026-03-08 00:02:24.879599 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879605 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879610 | orchestrator | }
2026-03-08 00:02:24.879615 | orchestrator |
2026-03-08 00:02:24.879621 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-08 00:02:24.879626 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:24.879632 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879637 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879642 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879678 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879684 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879689 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-08 00:02:24.879695 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879700 | orchestrator | + size = 80
2026-03-08 00:02:24.879709 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879715 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879720 | orchestrator | }
2026-03-08 00:02:24.879726 | orchestrator |
2026-03-08 00:02:24.879731 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-08 00:02:24.879737 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:24.879742 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879748 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879753 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879759 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879764 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879770 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-08 00:02:24.879775 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879780 | orchestrator | + size = 80
2026-03-08 00:02:24.879786 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879791 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879797 | orchestrator | }
2026-03-08 00:02:24.879802 | orchestrator |
2026-03-08 00:02:24.879808 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-08 00:02:24.879813 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:24.879819 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879824 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879830 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879840 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879845 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879851 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-08 00:02:24.879856 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879862 | orchestrator | + size = 80
2026-03-08 00:02:24.879867 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879872 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879878 | orchestrator | }
2026-03-08 00:02:24.879883 | orchestrator |
2026-03-08 00:02:24.879888 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-08 00:02:24.879894 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:24.879899 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879905 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879910 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879915 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:24.879921 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.879931 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-08 00:02:24.879936 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.879942 | orchestrator | + size = 80
2026-03-08 00:02:24.879947 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.879952 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.879958 | orchestrator | }
2026-03-08 00:02:24.879963 | orchestrator |
2026-03-08 00:02:24.879968 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-08 00:02:24.879974 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.879980 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.879985 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.879990 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.879996 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880001 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-08 00:02:24.880006 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880012 | orchestrator | + size = 20
2026-03-08 00:02:24.880017 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880023 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880028 | orchestrator | }
2026-03-08 00:02:24.880034 | orchestrator |
2026-03-08 00:02:24.880039 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-08 00:02:24.880044 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880050 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880055 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880061 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880066 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880071 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-08 00:02:24.880077 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880082 | orchestrator | + size = 20
2026-03-08 00:02:24.880088 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880093 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880098 | orchestrator | }
2026-03-08 00:02:24.880104 | orchestrator |
2026-03-08 00:02:24.880109 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-08 00:02:24.880115 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880120 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880125 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880131 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880136 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880142 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-08 00:02:24.880147 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880156 | orchestrator | + size = 20
2026-03-08 00:02:24.880162 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880167 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880173 | orchestrator | }
2026-03-08 00:02:24.880178 | orchestrator |
2026-03-08 00:02:24.880183 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-08 00:02:24.880189 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880194 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880200 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880205 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880213 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880219 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-08 00:02:24.880225 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880230 | orchestrator | + size = 20
2026-03-08 00:02:24.880235 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880240 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880246 | orchestrator | }
2026-03-08 00:02:24.880251 | orchestrator |
2026-03-08 00:02:24.880257 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-08 00:02:24.880262 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880267 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880273 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880278 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880283 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880289 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-08 00:02:24.880294 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880300 | orchestrator | + size = 20
2026-03-08 00:02:24.880305 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880310 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880316 | orchestrator | }
2026-03-08 00:02:24.880321 | orchestrator |
2026-03-08 00:02:24.880327 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-08 00:02:24.880332 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880337 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880343 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880348 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880354 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880359 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-08 00:02:24.880364 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880370 | orchestrator | + size = 20
2026-03-08 00:02:24.880375 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880381 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880386 | orchestrator | }
2026-03-08 00:02:24.880392 | orchestrator |
2026-03-08 00:02:24.880397 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-08 00:02:24.880403 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880408 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880413 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880419 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880424 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880430 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-08 00:02:24.880435 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880441 | orchestrator | + size = 20
2026-03-08 00:02:24.880446 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880451 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880457 | orchestrator | }
2026-03-08 00:02:24.880462 | orchestrator |
2026-03-08 00:02:24.880471 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-08 00:02:24.880477 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:24.880485 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:24.880491 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:24.880496 | orchestrator | + id = (known after apply)
2026-03-08 00:02:24.880502 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:24.880507 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-08 00:02:24.880513 | orchestrator | + region = (known after apply)
2026-03-08 00:02:24.880518 | orchestrator | + size = 20
2026-03-08 00:02:24.880523 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:24.880529 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:24.880534 | orchestrator | }
2026-03-08 00:02:24.880540 | orchestrator |
2026-03-08 00:02:24.880545 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-08 00:02:24.880551 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-08 00:02:24.880556 | orchestrator | + attachment = (known after apply) 2026-03-08 00:02:24.880562 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.880567 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.880572 | orchestrator | + metadata = (known after apply) 2026-03-08 00:02:24.880578 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-08 00:02:24.880583 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.880589 | orchestrator | + size = 20 2026-03-08 00:02:24.880594 | orchestrator | + volume_retype_policy = "never" 2026-03-08 00:02:24.880599 | orchestrator | + volume_type = "ssd" 2026-03-08 00:02:24.880605 | orchestrator | } 2026-03-08 00:02:24.880611 | orchestrator | 2026-03-08 00:02:24.880616 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-08 00:02:24.880621 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-08 00:02:24.880627 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.880632 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.880638 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.880643 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.880666 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.880671 | orchestrator | + config_drive = true 2026-03-08 00:02:24.880680 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.880685 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.880691 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-08 00:02:24.880696 | orchestrator | + force_delete = false 2026-03-08 00:02:24.880701 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.880707 | 
orchestrator | + id = (known after apply) 2026-03-08 00:02:24.880712 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.880718 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.880723 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.880728 | orchestrator | + name = "testbed-manager" 2026-03-08 00:02:24.880734 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.880739 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.880744 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.880750 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:24.880755 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.880761 | orchestrator | + user_data = (sensitive value) 2026-03-08 00:02:24.880766 | orchestrator | 2026-03-08 00:02:24.880771 | orchestrator | + block_device { 2026-03-08 00:02:24.880777 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.880783 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:24.880788 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.880793 | orchestrator | + multiattach = false 2026-03-08 00:02:24.880799 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.880804 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.880813 | orchestrator | } 2026-03-08 00:02:24.880819 | orchestrator | 2026-03-08 00:02:24.880824 | orchestrator | + network { 2026-03-08 00:02:24.880830 | orchestrator | + access_network = false 2026-03-08 00:02:24.880835 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.880841 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.880846 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.880851 | orchestrator | + name = (known after apply) 2026-03-08 00:02:24.880857 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.880862 | orchestrator | + uuid = (known after apply) 2026-03-08 
00:02:24.880867 | orchestrator | } 2026-03-08 00:02:24.880873 | orchestrator | } 2026-03-08 00:02:24.880878 | orchestrator | 2026-03-08 00:02:24.880884 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-08 00:02:24.880889 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:24.880895 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.880900 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.880905 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.880911 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.880918 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.880926 | orchestrator | + config_drive = true 2026-03-08 00:02:24.880935 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.880943 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.880952 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:24.880961 | orchestrator | + force_delete = false 2026-03-08 00:02:24.880973 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.880985 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.880994 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.881002 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.881011 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.881020 | orchestrator | + name = "testbed-node-0" 2026-03-08 00:02:24.881028 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.881036 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.881046 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.881055 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:24.881063 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.881072 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:24.881080 | orchestrator | 2026-03-08 00:02:24.881089 | orchestrator | + block_device { 2026-03-08 00:02:24.881098 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.881112 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:24.881124 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.881135 | orchestrator | + multiattach = false 2026-03-08 00:02:24.881143 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.881150 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.881158 | orchestrator | } 2026-03-08 00:02:24.881166 | orchestrator | 2026-03-08 00:02:24.881174 | orchestrator | + network { 2026-03-08 00:02:24.881182 | orchestrator | + access_network = false 2026-03-08 00:02:24.881190 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.881197 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.881205 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.881213 | orchestrator | + name = (known after apply) 2026-03-08 00:02:24.881220 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.881228 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.881236 | orchestrator | } 2026-03-08 00:02:24.881244 | orchestrator | } 2026-03-08 00:02:24.881252 | orchestrator | 2026-03-08 00:02:24.881259 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-08 00:02:24.881267 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:24.881275 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.881292 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.881300 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.881308 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.881316 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.881323 
| orchestrator | + config_drive = true 2026-03-08 00:02:24.881331 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.881339 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.881347 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:24.881354 | orchestrator | + force_delete = false 2026-03-08 00:02:24.881362 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.881373 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.881387 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.881400 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.881413 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.881426 | orchestrator | + name = "testbed-node-1" 2026-03-08 00:02:24.881438 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.881453 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.881467 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.881481 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:24.881493 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.881513 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:24.881527 | orchestrator | 2026-03-08 00:02:24.881540 | orchestrator | + block_device { 2026-03-08 00:02:24.881554 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.881566 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:24.881579 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.881592 | orchestrator | + multiattach = false 2026-03-08 00:02:24.881605 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.881619 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.881632 | orchestrator | } 2026-03-08 00:02:24.881676 | orchestrator | 2026-03-08 00:02:24.881691 | orchestrator | + network { 2026-03-08 00:02:24.881705 | orchestrator | + access_network = 
false 2026-03-08 00:02:24.881719 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.881733 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.881746 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.881757 | orchestrator | + name = (known after apply) 2026-03-08 00:02:24.881770 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.881782 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.881795 | orchestrator | } 2026-03-08 00:02:24.881808 | orchestrator | } 2026-03-08 00:02:24.881822 | orchestrator | 2026-03-08 00:02:24.881836 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-08 00:02:24.881849 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:24.881863 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.881877 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.881892 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.881905 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.881919 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.881932 | orchestrator | + config_drive = true 2026-03-08 00:02:24.881946 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.881960 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.881974 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:24.881987 | orchestrator | + force_delete = false 2026-03-08 00:02:24.882000 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.882042 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.882060 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.882083 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.882097 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.882110 | orchestrator | + name = 
"testbed-node-2" 2026-03-08 00:02:24.882122 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.882135 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.882149 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.882162 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:24.882176 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.882190 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:24.882203 | orchestrator | 2026-03-08 00:02:24.882217 | orchestrator | + block_device { 2026-03-08 00:02:24.882231 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.882244 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:24.882259 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.882272 | orchestrator | + multiattach = false 2026-03-08 00:02:24.882285 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.882299 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.882313 | orchestrator | } 2026-03-08 00:02:24.882327 | orchestrator | 2026-03-08 00:02:24.882340 | orchestrator | + network { 2026-03-08 00:02:24.882353 | orchestrator | + access_network = false 2026-03-08 00:02:24.882367 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.882380 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.882394 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.882414 | orchestrator | + name = (known after apply) 2026-03-08 00:02:24.882422 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.882430 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.882438 | orchestrator | } 2026-03-08 00:02:24.882446 | orchestrator | } 2026-03-08 00:02:24.882454 | orchestrator | 2026-03-08 00:02:24.882468 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-08 00:02:24.882476 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:24.882484 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.882492 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.882499 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.882507 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.882515 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.882523 | orchestrator | + config_drive = true 2026-03-08 00:02:24.882531 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.882539 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.882546 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:24.882554 | orchestrator | + force_delete = false 2026-03-08 00:02:24.882562 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.882570 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.882578 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.882585 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.882593 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.882601 | orchestrator | + name = "testbed-node-3" 2026-03-08 00:02:24.882609 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.882616 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.882624 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.882632 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:24.882640 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.882669 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:24.882677 | orchestrator | 2026-03-08 00:02:24.882685 | orchestrator | + block_device { 2026-03-08 00:02:24.882693 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.882701 | orchestrator | + delete_on_termination = false 2026-03-08 
00:02:24.882709 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.882723 | orchestrator | + multiattach = false 2026-03-08 00:02:24.882731 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.882739 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.882747 | orchestrator | } 2026-03-08 00:02:24.882755 | orchestrator | 2026-03-08 00:02:24.882763 | orchestrator | + network { 2026-03-08 00:02:24.882771 | orchestrator | + access_network = false 2026-03-08 00:02:24.882779 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.882787 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.882794 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.882802 | orchestrator | + name = (known after apply) 2026-03-08 00:02:24.882810 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.882818 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.882826 | orchestrator | } 2026-03-08 00:02:24.882834 | orchestrator | } 2026-03-08 00:02:24.882842 | orchestrator | 2026-03-08 00:02:24.882850 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-08 00:02:24.882858 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:24.882866 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.882874 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.882882 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.882889 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.882897 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.882905 | orchestrator | + config_drive = true 2026-03-08 00:02:24.882913 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.882921 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.882929 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:24.882937 | 
orchestrator | + force_delete = false 2026-03-08 00:02:24.882944 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.882952 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.882960 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.882968 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.882976 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.882984 | orchestrator | + name = "testbed-node-4" 2026-03-08 00:02:24.882992 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.883000 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.883007 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.883015 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:24.883023 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.883031 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:24.883039 | orchestrator | 2026-03-08 00:02:24.883047 | orchestrator | + block_device { 2026-03-08 00:02:24.883055 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.883063 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:24.883071 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.883078 | orchestrator | + multiattach = false 2026-03-08 00:02:24.883086 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.883094 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.883102 | orchestrator | } 2026-03-08 00:02:24.883110 | orchestrator | 2026-03-08 00:02:24.883118 | orchestrator | + network { 2026-03-08 00:02:24.883126 | orchestrator | + access_network = false 2026-03-08 00:02:24.883134 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.883142 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.883149 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.883157 | orchestrator | + name = (known 
after apply) 2026-03-08 00:02:24.883165 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.883173 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.883181 | orchestrator | } 2026-03-08 00:02:24.883188 | orchestrator | } 2026-03-08 00:02:24.883205 | orchestrator | 2026-03-08 00:02:24.883214 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-08 00:02:24.883222 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:24.883230 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:24.883238 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:24.883245 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:24.883257 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.883265 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:24.883273 | orchestrator | + config_drive = true 2026-03-08 00:02:24.883281 | orchestrator | + created = (known after apply) 2026-03-08 00:02:24.883289 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:24.883297 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:24.883304 | orchestrator | + force_delete = false 2026-03-08 00:02:24.883312 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:24.883320 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.883328 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:24.883335 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:24.883343 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:24.883351 | orchestrator | + name = "testbed-node-5" 2026-03-08 00:02:24.883359 | orchestrator | + power_state = "active" 2026-03-08 00:02:24.883367 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.883374 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:24.883382 | orchestrator | + 
stop_before_destroy = false 2026-03-08 00:02:24.883390 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:24.883398 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:24.883406 | orchestrator | 2026-03-08 00:02:24.883414 | orchestrator | + block_device { 2026-03-08 00:02:24.883421 | orchestrator | + boot_index = 0 2026-03-08 00:02:24.883429 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:24.883437 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:24.883445 | orchestrator | + multiattach = false 2026-03-08 00:02:24.883452 | orchestrator | + source_type = "volume" 2026-03-08 00:02:24.883460 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.883468 | orchestrator | } 2026-03-08 00:02:24.883476 | orchestrator | 2026-03-08 00:02:24.883484 | orchestrator | + network { 2026-03-08 00:02:24.883492 | orchestrator | + access_network = false 2026-03-08 00:02:24.883500 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:24.883507 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:24.883515 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:24.883523 | orchestrator | + name = (known after apply) 2026-03-08 00:02:24.883531 | orchestrator | + port = (known after apply) 2026-03-08 00:02:24.883539 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:24.883547 | orchestrator | } 2026-03-08 00:02:24.883555 | orchestrator | } 2026-03-08 00:02:24.883563 | orchestrator | 2026-03-08 00:02:24.883571 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-08 00:02:24.883579 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-08 00:02:24.883587 | orchestrator | + fingerprint = (known after apply) 2026-03-08 00:02:24.883594 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.883602 | orchestrator | + name = "testbed" 2026-03-08 00:02:24.883610 | orchestrator | + private_key = 
(sensitive value) 2026-03-08 00:02:24.883618 | orchestrator | + public_key = (known after apply) 2026-03-08 00:02:24.883625 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.883633 | orchestrator | + user_id = (known after apply) 2026-03-08 00:02:24.883641 | orchestrator | } 2026-03-08 00:02:24.883666 | orchestrator | 2026-03-08 00:02:24.883674 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-08 00:02:24.883682 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:24.883695 | orchestrator | + device = (known after apply) 2026-03-08 00:02:24.883703 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.883711 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:24.883718 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.883732 | orchestrator | + volume_id = (known after apply) 2026-03-08 00:02:24.883740 | orchestrator | } 2026-03-08 00:02:24.883748 | orchestrator | 2026-03-08 00:02:24.883756 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-08 00:02:24.883764 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:24.883772 | orchestrator | + device = (known after apply) 2026-03-08 00:02:24.883779 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.883787 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:24.883795 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.883803 | orchestrator | + volume_id = (known after apply) 2026-03-08 00:02:24.883811 | orchestrator | } 2026-03-08 00:02:24.883818 | orchestrator | 2026-03-08 00:02:24.883826 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-08 00:02:24.883834 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-08 00:02:24.883842 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
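The repeated per-index `node_volume_attachment` blocks in the plan are characteristic of a `count`-based resource. A minimal hypothetical sketch of Terraform configuration that would produce such a plan (the variable and resource names `node_count`, `testbed_node`, and `node_volume` are assumptions, not taken from the actual testbed repository):

```hcl
# Hypothetical sketch: attach one pre-created volume to each node by index.
# All referenced resources/variables are illustrative assumptions.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.node_count
  instance_id = openstack_compute_instance_v2.testbed_node[count.index].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

With `count`, each attachment gets its own index in state, which is why the plan lists `node_volume_attachment[3]` through `[8]` as separate instances.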
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
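The management ports planned here combine a static `fixed_ip` with `allowed_address_pairs` for the shared VIP addresses. A hypothetical sketch of the pattern (the `node_count` variable, the `cidrhost` offset, and the single VIP shown are illustrative assumptions; the real configuration carries several pairs):

```hcl
# Hypothetical sketch of a node management port with a VIP address pair.
# Without allowed_address_pairs, Neutron port security would drop traffic
# sent from addresses other than the port's own fixed IP (e.g. a VRRP VIP).
resource "openstack_networking_port_v2" "node_port_management" {
  count      = var.node_count
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/32"
  }
}
```

`cidrhost("192.168.16.0/20", 10 + count.index)` yields 192.168.16.10, .11, .12, … for successive indices, matching the fixed IPs in the plan.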
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      =
                                  (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
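The node security group opens tcp, udp, and icmp with identical attributes, differing only in `protocol`. A hypothetical sketch of how such near-identical rules can be generated from one resource (the `for_each` pattern and the `node_ingress` name are assumptions; the actual configuration may simply declare three separate resources, as the plan's `_rule1`/`_rule2`/`_rule3` naming suggests):

```hcl
# Hypothetical sketch: one ingress rule per protocol via for_each.
# Names and the for_each approach are illustrative assumptions.
resource "openstack_networking_secgroup_rule_v2" "node_ingress" {
  for_each          = toset(["tcp", "udp", "icmp"])
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = each.value
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```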
00:02:24.887635 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-03-08 00:02:24.887640 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-03-08 00:02:24.887657 | orchestrator | + description = "vrrp" 2026-03-08 00:02:24.887663 | orchestrator | + direction = "ingress" 2026-03-08 00:02:24.887669 | orchestrator | + ethertype = "IPv4" 2026-03-08 00:02:24.887675 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.887681 | orchestrator | + protocol = "112" 2026-03-08 00:02:24.887686 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.887692 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-08 00:02:24.887698 | orchestrator | + remote_group_id = (known after apply) 2026-03-08 00:02:24.887704 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-08 00:02:24.887709 | orchestrator | + security_group_id = (known after apply) 2026-03-08 00:02:24.887715 | orchestrator | + tenant_id = (known after apply) 2026-03-08 00:02:24.887721 | orchestrator | } 2026-03-08 00:02:24.887727 | orchestrator | 2026-03-08 00:02:24.887736 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-03-08 00:02:24.887742 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-03-08 00:02:24.887748 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.887754 | orchestrator | + description = "management security group" 2026-03-08 00:02:24.887760 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.887765 | orchestrator | + name = "testbed-management" 2026-03-08 00:02:24.887771 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.887777 | orchestrator | + stateful = (known after apply) 2026-03-08 00:02:24.887782 | orchestrator | + tenant_id = (known after apply) 2026-03-08 00:02:24.887788 | orchestrator | } 2026-03-08 
00:02:24.887794 | orchestrator | 2026-03-08 00:02:24.887800 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-03-08 00:02:24.887805 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-03-08 00:02:24.887811 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.887817 | orchestrator | + description = "node security group" 2026-03-08 00:02:24.887823 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.887829 | orchestrator | + name = "testbed-node" 2026-03-08 00:02:24.887834 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.887840 | orchestrator | + stateful = (known after apply) 2026-03-08 00:02:24.887846 | orchestrator | + tenant_id = (known after apply) 2026-03-08 00:02:24.887852 | orchestrator | } 2026-03-08 00:02:24.887857 | orchestrator | 2026-03-08 00:02:24.887863 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-03-08 00:02:24.887869 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-03-08 00:02:24.887875 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:24.887880 | orchestrator | + cidr = "192.168.16.0/20" 2026-03-08 00:02:24.887886 | orchestrator | + dns_nameservers = [ 2026-03-08 00:02:24.887892 | orchestrator | + "8.8.8.8", 2026-03-08 00:02:24.887898 | orchestrator | + "9.9.9.9", 2026-03-08 00:02:24.887904 | orchestrator | ] 2026-03-08 00:02:24.887910 | orchestrator | + enable_dhcp = true 2026-03-08 00:02:24.887916 | orchestrator | + gateway_ip = (known after apply) 2026-03-08 00:02:24.887925 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.887931 | orchestrator | + ip_version = 4 2026-03-08 00:02:24.887937 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-08 00:02:24.887943 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-08 00:02:24.887948 | orchestrator | + name = "subnet-testbed-management" 
2026-03-08 00:02:24.887954 | orchestrator | + network_id = (known after apply) 2026-03-08 00:02:24.887960 | orchestrator | + no_gateway = false 2026-03-08 00:02:24.887966 | orchestrator | + region = (known after apply) 2026-03-08 00:02:24.887971 | orchestrator | + service_types = (known after apply) 2026-03-08 00:02:24.887981 | orchestrator | + tenant_id = (known after apply) 2026-03-08 00:02:24.887987 | orchestrator | 2026-03-08 00:02:24.887993 | orchestrator | + allocation_pool { 2026-03-08 00:02:24.887999 | orchestrator | + end = "192.168.31.250" 2026-03-08 00:02:24.888005 | orchestrator | + start = "192.168.31.200" 2026-03-08 00:02:24.888010 | orchestrator | } 2026-03-08 00:02:24.888016 | orchestrator | } 2026-03-08 00:02:24.888022 | orchestrator | 2026-03-08 00:02:24.888028 | orchestrator | # terraform_data.image will be created 2026-03-08 00:02:24.888033 | orchestrator | + resource "terraform_data" "image" { 2026-03-08 00:02:24.888039 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.888045 | orchestrator | + input = "Ubuntu 24.04" 2026-03-08 00:02:24.888051 | orchestrator | + output = (known after apply) 2026-03-08 00:02:24.888056 | orchestrator | } 2026-03-08 00:02:24.888062 | orchestrator | 2026-03-08 00:02:24.888068 | orchestrator | # terraform_data.image_node will be created 2026-03-08 00:02:24.888074 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-08 00:02:24.888079 | orchestrator | + id = (known after apply) 2026-03-08 00:02:24.888085 | orchestrator | + input = "Ubuntu 24.04" 2026-03-08 00:02:24.888091 | orchestrator | + output = (known after apply) 2026-03-08 00:02:24.888097 | orchestrator | } 2026-03-08 00:02:24.888102 | orchestrator | 2026-03-08 00:02:24.888108 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-08 00:02:24.888114 | orchestrator | 2026-03-08 00:02:24.888120 | orchestrator | Changes to Outputs: 2026-03-08 00:02:24.888126 | orchestrator | + manager_address = (sensitive value) 2026-03-08 00:02:24.888132 | orchestrator | + private_key = (sensitive value) 2026-03-08 00:02:25.137582 | orchestrator | terraform_data.image: Creating... 2026-03-08 00:02:25.137712 | orchestrator | terraform_data.image_node: Creating... 2026-03-08 00:02:25.137900 | orchestrator | terraform_data.image: Creation complete after 0s [id=cbbc9dc8-dcbb-1628-9637-f0835eda54f8] 2026-03-08 00:02:25.139288 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6e5c19fe-80de-b4cd-f7b4-a00ba794c7f2] 2026-03-08 00:02:25.162796 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-08 00:02:25.168754 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-08 00:02:25.170051 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-08 00:02:25.172528 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-08 00:02:25.172580 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-08 00:02:25.172961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-08 00:02:25.177284 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-08 00:02:25.178384 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-08 00:02:25.181317 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-08 00:02:25.181363 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-08 00:02:25.616841 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-08 00:02:25.624099 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2026-03-08 00:02:25.663108 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-08 00:02:25.670943 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-08 00:02:25.942515 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-08 00:02:25.949974 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-08 00:02:27.888340 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 3s [id=cd6cee4f-0bc7-4380-8063-a9d428d6b342] 2026-03-08 00:02:27.899894 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-08 00:02:28.828289 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=2c822381-711e-4b88-8f0f-ccd9d68009a2] 2026-03-08 00:02:28.831125 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=584a8cd2-f1cf-4783-b73b-bdfda5fabfa8] 2026-03-08 00:02:28.841435 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=875b6ffb-6cc0-40cb-be90-c8d29b416698] 2026-03-08 00:02:28.846389 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-08 00:02:28.851109 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=b92b8eda66883a5c43952910032b5bf749b3c376] 2026-03-08 00:02:28.855435 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-08 00:02:28.856427 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-08 00:02:28.856521 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2026-03-08 00:02:28.859715 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=9d9faa04-2f3c-436d-9a5f-1631de10dde0] 2026-03-08 00:02:28.860312 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=23040f686f4fd599e5ad4528e211c312c273b27b] 2026-03-08 00:02:28.863910 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-08 00:02:28.865350 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-08 00:02:28.870146 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=09c0cde5-d1e2-470c-ab2a-905eda1e5751] 2026-03-08 00:02:28.876436 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-08 00:02:28.910845 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=c3f7b7f4-f798-492f-86ac-7ce39be70087] 2026-03-08 00:02:28.918402 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-08 00:02:28.932825 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=272aa0da-7148-40c6-996c-fa485e579a0c] 2026-03-08 00:02:28.943504 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-08 00:02:28.947961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=13127977-6a78-466e-81ef-45b79edafbaf] 2026-03-08 00:02:29.174298 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe] 2026-03-08 00:02:30.603791 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=a84c6f04-6caa-4ede-9af3-aab4d8ea4e15] 2026-03-08 00:02:30.612430 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-03-08 00:02:31.298388 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=6e2a3860-4d11-4727-ba87-fc00fc627f2e] 2026-03-08 00:02:32.305410 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=bf175d06-491f-4ffa-8a22-9754e6fb303e] 2026-03-08 00:02:32.620578 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=81fff5b5-671f-4cf3-9542-12bc3254aff6] 2026-03-08 00:02:32.620617 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=0df70171-b12b-4f0f-b69e-ca5c94cd8fa7] 2026-03-08 00:02:32.620629 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=02cca760-af2e-4d6c-87bb-7fb3c7fbc633] 2026-03-08 00:02:32.620662 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=ef0824cf-3936-4a61-9f2a-e804dfd60cf7] 2026-03-08 00:02:32.620674 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=8e2d3775-a8c9-4c11-b3d1-42f82657682c] 2026-03-08 00:02:34.626282 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=3535f383-9277-4d82-9fb2-8632f1ad4b72] 2026-03-08 00:02:34.637669 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-08 00:02:34.637759 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-08 00:02:34.637772 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-08 00:02:34.830185 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d4914d21-d412-495b-bea5-61d2a821cf3a] 2026-03-08 00:02:34.836771 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
2026-03-08 00:02:34.837456 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-08 00:02:34.842771 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-08 00:02:34.844425 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-08 00:02:34.846306 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-08 00:02:34.846803 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-08 00:02:34.881479 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9d45f688-5320-4ecf-8ffc-928cdc740fe1] 2026-03-08 00:02:34.893393 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-08 00:02:34.893826 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-08 00:02:34.894112 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-08 00:02:35.014611 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=4d21036f-4dbd-422a-b35b-cd465ed9338c] 2026-03-08 00:02:35.020142 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-08 00:02:35.066072 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=bd7ce049-d043-4706-b1ff-cdc3c0e38ced] 2026-03-08 00:02:35.078510 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 
2026-03-08 00:02:35.181315 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=f1757912-f45f-4020-a059-35b56afdb354] 2026-03-08 00:02:35.192446 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-08 00:02:35.233724 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=e63d3e3d-1a68-4315-b4e5-3e60f5a08157] 2026-03-08 00:02:35.243789 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-08 00:02:35.399688 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=5b5820b0-b5d6-4e88-9b84-9d59fb1d1ddf] 2026-03-08 00:02:35.411261 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-08 00:02:35.472979 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=b686e73e-d553-44fa-be8c-4b70cbf610dd] 2026-03-08 00:02:35.485585 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-08 00:02:35.671004 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b8b775e9-8230-4b9b-a319-3fe8373ea332] 2026-03-08 00:02:35.686959 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-03-08 00:02:35.732732 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=7560de2f-9fbb-499c-8389-5a7f821c619d] 2026-03-08 00:02:35.872817 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=bf96625a-532f-4082-a9c1-56770cda0c03] 2026-03-08 00:02:36.424653 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=af84bd44-5147-47ec-b1eb-da3685bca599] 2026-03-08 00:02:36.460247 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=30920aa8-560a-44e3-8f7a-3be871049af9] 2026-03-08 00:02:36.527729 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=0bbfe3c3-4177-49fe-bac7-afc87a220441] 2026-03-08 00:02:36.532315 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=7e6dd57d-ee2c-44c6-ac2b-f589d14acb78] 2026-03-08 00:02:37.009761 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=0bc3cf1a-faa0-49ac-b130-c0ae635e39d6] 2026-03-08 00:02:37.341747 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=c9907550-5654-43e6-b24f-f0ccfab8a90f] 2026-03-08 00:02:37.591256 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 3s [id=46cb19b3-abeb-4066-8d53-ca86ab39c3f6] 2026-03-08 00:02:38.443723 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=c36b6945-f23f-4f3c-859e-84e909b93157] 2026-03-08 00:02:38.458271 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-08 00:02:38.474796 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 
2026-03-08 00:02:38.483127 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-08 00:02:38.488761 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-08 00:02:38.489234 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-08 00:02:38.489589 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-08 00:02:38.491806 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-08 00:02:40.194848 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=db14e3f2-41c7-4bf1-a013-0357c6847368] 2026-03-08 00:02:40.206393 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-08 00:02:40.210869 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-08 00:02:40.212010 | orchestrator | local_file.inventory: Creating... 2026-03-08 00:02:40.216017 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=347652824d08b02f1020772c4c35b38a28a870bd] 2026-03-08 00:02:40.216458 | orchestrator | local_file.inventory: Creation complete after 0s [id=f918d8374f57babeefc9b03f69301a0269388b14] 2026-03-08 00:02:41.062198 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=db14e3f2-41c7-4bf1-a013-0357c6847368] 2026-03-08 00:02:48.475455 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-08 00:02:48.483739 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-08 00:02:48.488947 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-08 00:02:48.492366 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-03-08 00:02:48.492464 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-08 00:02:48.493728 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-08 00:02:58.475968 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-08 00:02:58.484204 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-08 00:02:58.489498 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-08 00:02:58.492773 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-08 00:02:58.492822 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-08 00:02:58.493874 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-08 00:02:59.072807 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=e8114bb1-f42f-4c82-a4a4-b9a4483cefce] 2026-03-08 00:02:59.175257 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=e3815e33-95dd-4387-a453-ab80469ea305] 2026-03-08 00:02:59.233522 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=21e191b0-22dc-477d-b2a2-0f2931fe4635] 2026-03-08 00:03:08.476298 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-08 00:03:08.485009 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-08 00:03:08.493642 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-03-08 00:03:09.071557 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=7461cffe-f270-4339-8a96-0e71b1748da1] 2026-03-08 00:03:09.151756 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=5fac3c64-05bc-4ff3-af9b-f701a72d2fd7] 2026-03-08 00:03:09.276317 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=e9793450-14a9-4e3a-9fad-0a7d3e53d2b5] 2026-03-08 00:03:09.291360 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-08 00:03:09.298541 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-08 00:03:09.301239 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-08 00:03:09.306390 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-08 00:03:09.321344 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2825021609448892418] 2026-03-08 00:03:09.324573 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-08 00:03:09.327694 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-08 00:03:09.328105 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-08 00:03:09.328323 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-08 00:03:09.333474 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-08 00:03:09.347308 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-08 00:03:09.353257 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-03-08 00:03:12.698265 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=21e191b0-22dc-477d-b2a2-0f2931fe4635/c3f7b7f4-f798-492f-86ac-7ce39be70087] 2026-03-08 00:03:12.708689 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=7461cffe-f270-4339-8a96-0e71b1748da1/4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe] 2026-03-08 00:03:12.744826 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=5fac3c64-05bc-4ff3-af9b-f701a72d2fd7/584a8cd2-f1cf-4783-b73b-bdfda5fabfa8] 2026-03-08 00:03:18.827752 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=21e191b0-22dc-477d-b2a2-0f2931fe4635/09c0cde5-d1e2-470c-ab2a-905eda1e5751] 2026-03-08 00:03:18.837874 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=7461cffe-f270-4339-8a96-0e71b1748da1/13127977-6a78-466e-81ef-45b79edafbaf] 2026-03-08 00:03:18.859984 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=5fac3c64-05bc-4ff3-af9b-f701a72d2fd7/272aa0da-7148-40c6-996c-fa485e579a0c] 2026-03-08 00:03:18.875370 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=5fac3c64-05bc-4ff3-af9b-f701a72d2fd7/2c822381-711e-4b88-8f0f-ccd9d68009a2] 2026-03-08 00:03:18.885307 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=21e191b0-22dc-477d-b2a2-0f2931fe4635/875b6ffb-6cc0-40cb-be90-c8d29b416698] 2026-03-08 00:03:18.919961 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=7461cffe-f270-4339-8a96-0e71b1748da1/9d9faa04-2f3c-436d-9a5f-1631de10dde0] 2026-03-08 00:03:19.358392 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-08 00:03:29.358874 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-08 00:03:29.735948 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=9f04ec1c-087a-4bf1-9a30-4f7af9538144] 2026-03-08 00:03:29.755207 | orchestrator | 2026-03-08 00:03:29.755268 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-08 00:03:29.755300 | orchestrator | 2026-03-08 00:03:29.755308 | orchestrator | Outputs: 2026-03-08 00:03:29.755314 | orchestrator | 2026-03-08 00:03:29.755335 | orchestrator | manager_address = 2026-03-08 00:03:29.755343 | orchestrator | private_key = 2026-03-08 00:03:30.174110 | orchestrator | ok: Runtime: 0:01:15.557529 2026-03-08 00:03:30.194139 | 2026-03-08 00:03:30.194263 | TASK [Fetch manager address] 2026-03-08 00:03:30.692122 | orchestrator | ok 2026-03-08 00:03:30.703095 | 2026-03-08 00:03:30.703227 | TASK [Set manager_host address] 2026-03-08 00:03:30.784817 | orchestrator | ok 2026-03-08 00:03:30.794489 | 2026-03-08 00:03:30.794642 | LOOP [Update ansible collections] 2026-03-08 00:03:32.162483 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-08 00:03:32.162972 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-08 00:03:32.163050 | orchestrator | Starting galaxy collection install process 2026-03-08 00:03:32.163093 | orchestrator | Process install dependency map 2026-03-08 00:03:32.163127 | orchestrator | Starting collection install process 2026-03-08 00:03:32.163161 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-03-08 00:03:32.163201 | orchestrator | Created collection for osism.commons:999.0.0 at 
/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-03-08 00:03:32.163246 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-08 00:03:32.163318 | orchestrator | ok: Item: commons Runtime: 0:00:01.006117 2026-03-08 00:03:33.184682 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-08 00:03:33.184801 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-08 00:03:33.184830 | orchestrator | Starting galaxy collection install process 2026-03-08 00:03:33.184852 | orchestrator | Process install dependency map 2026-03-08 00:03:33.184873 | orchestrator | Starting collection install process 2026-03-08 00:03:33.184894 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-03-08 00:03:33.184914 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-03-08 00:03:33.184933 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-08 00:03:33.184965 | orchestrator | ok: Item: services Runtime: 0:00:00.725562 2026-03-08 00:03:33.204176 | 2026-03-08 00:03:33.204435 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-08 00:03:43.890956 | orchestrator | ok 2026-03-08 00:03:43.899276 | 2026-03-08 00:03:43.899405 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-08 00:04:43.944378 | orchestrator | ok 2026-03-08 00:04:43.953864 | 2026-03-08 00:04:43.954057 | TASK [Fetch manager ssh hostkey] 2026-03-08 00:04:45.527401 | orchestrator | Output suppressed because no_log was given 2026-03-08 00:04:45.543253 | 2026-03-08 00:04:45.543431 | TASK [Get ssh keypair from terraform environment] 2026-03-08 00:04:46.084636 | orchestrator | ok: Runtime: 0:00:00.007059 2026-03-08 00:04:46.099555 | 
2026-03-08 00:04:46.099776 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-08 00:04:46.142795 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-08 00:04:46.151171 | 2026-03-08 00:04:46.151333 | TASK [Run manager part 0] 2026-03-08 00:04:47.105196 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-08 00:04:47.183596 | orchestrator | 2026-03-08 00:04:47.183653 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-08 00:04:47.183660 | orchestrator | 2026-03-08 00:04:47.183674 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-08 00:04:49.062187 | orchestrator | ok: [testbed-manager] 2026-03-08 00:04:49.062235 | orchestrator | 2026-03-08 00:04:49.062258 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-08 00:04:49.062267 | orchestrator | 2026-03-08 00:04:49.062276 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:04:50.982599 | orchestrator | ok: [testbed-manager] 2026-03-08 00:04:50.982672 | orchestrator | 2026-03-08 00:04:50.982683 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-08 00:04:51.665207 | orchestrator | ok: [testbed-manager] 2026-03-08 00:04:51.665247 | orchestrator | 2026-03-08 00:04:51.665257 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-08 00:04:51.704034 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.704071 | orchestrator | 2026-03-08 00:04:51.704079 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-08 
00:04:51.732284 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.732319 | orchestrator | 2026-03-08 00:04:51.732326 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-08 00:04:51.761992 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.762107 | orchestrator | 2026-03-08 00:04:51.762113 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-08 00:04:51.786124 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.786164 | orchestrator | 2026-03-08 00:04:51.786174 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-08 00:04:51.814071 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.814118 | orchestrator | 2026-03-08 00:04:51.814132 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-08 00:04:51.859971 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.860122 | orchestrator | 2026-03-08 00:04:51.860133 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-08 00:04:51.888459 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:51.888495 | orchestrator | 2026-03-08 00:04:51.888541 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-08 00:04:52.598934 | orchestrator | changed: [testbed-manager] 2026-03-08 00:04:52.598979 | orchestrator | 2026-03-08 00:04:52.598985 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-08 00:07:47.990569 | orchestrator | changed: [testbed-manager] 2026-03-08 00:07:47.990626 | orchestrator | 2026-03-08 00:07:47.990640 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-08 00:09:30.160591 | orchestrator | changed: [testbed-manager] 2026-03-08 
00:09:30.160794 | orchestrator | 2026-03-08 00:09:30.160816 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-08 00:09:52.690627 | orchestrator | changed: [testbed-manager] 2026-03-08 00:09:52.690670 | orchestrator | 2026-03-08 00:09:52.690680 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-08 00:10:01.271668 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:01.271708 | orchestrator | 2026-03-08 00:10:01.271715 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-08 00:10:01.309484 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:01.309529 | orchestrator | 2026-03-08 00:10:01.309539 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-08 00:10:02.126080 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:02.126131 | orchestrator | 2026-03-08 00:10:02.126144 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-08 00:10:02.859076 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:02.859138 | orchestrator | 2026-03-08 00:10:02.859149 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-08 00:10:08.854560 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:08.854635 | orchestrator | 2026-03-08 00:10:08.854682 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-08 00:10:14.863821 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:14.863913 | orchestrator | 2026-03-08 00:10:14.863933 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-08 00:10:17.456132 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:17.456174 | orchestrator | 2026-03-08 00:10:17.456182 | orchestrator | TASK 
[Install docker >= 7.1.0] ************************************************* 2026-03-08 00:10:19.199106 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:19.199372 | orchestrator | 2026-03-08 00:10:19.199399 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-08 00:10:20.296107 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-08 00:10:20.296194 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-08 00:10:20.296210 | orchestrator | 2026-03-08 00:10:20.296222 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-08 00:10:20.363854 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-08 00:10:20.363934 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-08 00:10:20.363948 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-08 00:10:20.363961 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-08 00:10:28.252738 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-08 00:10:28.252823 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-08 00:10:28.252834 | orchestrator | 2026-03-08 00:10:28.252843 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-08 00:10:28.815321 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:28.815406 | orchestrator | 2026-03-08 00:10:28.815422 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-08 00:11:49.661959 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-08 00:11:49.662092 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-08 00:11:49.662113 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-08 00:11:49.662126 | orchestrator | 2026-03-08 00:11:49.662138 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-08 00:11:51.951949 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-08 00:11:51.952064 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-08 00:11:51.952078 | orchestrator | 2026-03-08 00:11:51.952089 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-08 00:11:51.952100 | orchestrator | 2026-03-08 00:11:51.952111 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:11:53.347011 | orchestrator | ok: [testbed-manager] 2026-03-08 00:11:53.347110 | orchestrator | 2026-03-08 00:11:53.347128 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-08 00:11:53.397541 | orchestrator | ok: [testbed-manager] 2026-03-08 00:11:53.397585 | 
orchestrator | 2026-03-08 00:11:53.397592 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-08 00:11:53.471931 | orchestrator | ok: [testbed-manager] 2026-03-08 00:11:53.471993 | orchestrator | 2026-03-08 00:11:53.471999 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-08 00:11:54.265048 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:54.265149 | orchestrator | 2026-03-08 00:11:54.265162 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-08 00:11:54.968152 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:54.968210 | orchestrator | 2026-03-08 00:11:54.968218 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-08 00:11:56.287648 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-08 00:11:56.287735 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-08 00:11:56.287750 | orchestrator | 2026-03-08 00:11:56.287778 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-08 00:11:57.724388 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:57.724516 | orchestrator | 2026-03-08 00:11:57.724534 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-08 00:11:59.419708 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:11:59.419752 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-08 00:11:59.419759 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:11:59.419765 | orchestrator | 2026-03-08 00:11:59.419772 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-08 00:11:59.480809 | orchestrator | skipping: 
[testbed-manager] 2026-03-08 00:11:59.480943 | orchestrator | 2026-03-08 00:11:59.480967 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-08 00:11:59.555533 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:11:59.555635 | orchestrator | 2026-03-08 00:11:59.555659 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-08 00:12:00.138149 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:00.138231 | orchestrator | 2026-03-08 00:12:00.138247 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-08 00:12:00.225234 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:00.225296 | orchestrator | 2026-03-08 00:12:00.225305 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-08 00:12:01.070970 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:12:01.071053 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:01.071068 | orchestrator | 2026-03-08 00:12:01.071080 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-08 00:12:01.110199 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:01.110279 | orchestrator | 2026-03-08 00:12:01.110293 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-08 00:12:01.152862 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:01.152974 | orchestrator | 2026-03-08 00:12:01.152991 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-08 00:12:01.195459 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:01.195526 | orchestrator | 2026-03-08 00:12:01.195542 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-08 00:12:01.266818 | 
orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:01.266857 | orchestrator | 2026-03-08 00:12:01.266864 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-08 00:12:01.974534 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:01.974579 | orchestrator | 2026-03-08 00:12:01.974589 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-08 00:12:01.974598 | orchestrator | 2026-03-08 00:12:01.974607 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:12:03.373508 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:03.373588 | orchestrator | 2026-03-08 00:12:03.373612 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-08 00:12:04.305268 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:04.305304 | orchestrator | 2026-03-08 00:12:04.305309 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:12:04.305315 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-08 00:12:04.305320 | orchestrator | 2026-03-08 00:12:04.486531 | orchestrator | ok: Runtime: 0:07:17.957364 2026-03-08 00:12:04.496114 | 2026-03-08 00:12:04.496228 | TASK [Point out that logging in to the manager is now possible] 2026-03-08 00:12:04.542759 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-08 00:12:04.552053 | 2026-03-08 00:12:04.552174 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-08 00:12:04.597256 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 
2026-03-08 00:12:04.605378 | 2026-03-08 00:12:04.605486 | TASK [Run manager part 1 + 2] 2026-03-08 00:12:05.630441 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-08 00:12:05.722694 | orchestrator | 2026-03-08 00:12:05.722810 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-08 00:12:05.722839 | orchestrator | 2026-03-08 00:12:05.722908 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:12:08.682389 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:08.682478 | orchestrator | 2026-03-08 00:12:08.682524 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-08 00:12:08.725424 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:08.725473 | orchestrator | 2026-03-08 00:12:08.725482 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-08 00:12:08.783389 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:08.783449 | orchestrator | 2026-03-08 00:12:08.783460 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-08 00:12:08.828989 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:08.829036 | orchestrator | 2026-03-08 00:12:08.829043 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-08 00:12:08.902411 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:08.902462 | orchestrator | 2026-03-08 00:12:08.902470 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-08 00:12:08.978819 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:08.978889 | orchestrator | 2026-03-08 00:12:08.978898 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-08 00:12:09.020454 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-08 00:12:09.020507 | orchestrator | 2026-03-08 00:12:09.020513 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-08 00:12:09.705730 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:09.705829 | orchestrator | 2026-03-08 00:12:09.705859 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-08 00:12:09.754375 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:09.754427 | orchestrator | 2026-03-08 00:12:09.754434 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-08 00:12:11.114627 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:11.114683 | orchestrator | 2026-03-08 00:12:11.114691 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-08 00:12:11.654978 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:11.655036 | orchestrator | 2026-03-08 00:12:11.655042 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-08 00:12:12.744950 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:12.745028 | orchestrator | 2026-03-08 00:12:12.745045 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-08 00:12:28.587216 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:28.587448 | orchestrator | 2026-03-08 00:12:28.587473 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-08 00:12:29.293830 | orchestrator | ok: [testbed-manager] 2026-03-08 00:12:29.293911 | orchestrator | 2026-03-08 00:12:29.293954 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-08 00:12:29.352778 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:29.352840 | orchestrator | 2026-03-08 00:12:29.352866 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-08 00:12:30.323573 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:30.323646 | orchestrator | 2026-03-08 00:12:30.323662 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-08 00:12:31.279631 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:31.279713 | orchestrator | 2026-03-08 00:12:31.279727 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-08 00:12:31.877674 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:31.877747 | orchestrator | 2026-03-08 00:12:31.877760 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-08 00:12:31.926525 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-08 00:12:31.926650 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-08 00:12:31.926677 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-08 00:12:31.926699 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-08 00:12:34.039347 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:34.039401 | orchestrator | 2026-03-08 00:12:34.039409 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-08 00:12:42.910316 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-08 00:12:42.910416 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-08 00:12:42.910434 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-08 00:12:42.910446 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-08 00:12:42.910465 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-08 00:12:42.910476 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-08 00:12:42.910487 | orchestrator | 2026-03-08 00:12:42.910499 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-08 00:12:43.951723 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:43.951768 | orchestrator | 2026-03-08 00:12:43.951781 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-08 00:12:43.992578 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:43.992643 | orchestrator | 2026-03-08 00:12:43.992653 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-08 00:12:47.090002 | orchestrator | changed: [testbed-manager] 2026-03-08 00:12:47.090118 | orchestrator | 2026-03-08 00:12:47.090135 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-08 00:12:47.137366 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:12:47.137456 | orchestrator | 2026-03-08 00:12:47.137474 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-08 00:14:31.294378 | orchestrator | changed: [testbed-manager] 2026-03-08 
00:14:31.294468 | orchestrator | 2026-03-08 00:14:31.294488 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-08 00:14:32.421866 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:32.421904 | orchestrator | 2026-03-08 00:14:32.421910 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:14:32.421916 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-08 00:14:32.421920 | orchestrator | 2026-03-08 00:14:32.980063 | orchestrator | ok: Runtime: 0:02:27.633348 2026-03-08 00:14:32.998170 | 2026-03-08 00:14:32.998353 | TASK [Reboot manager] 2026-03-08 00:14:34.537478 | orchestrator | ok: Runtime: 0:00:00.935969 2026-03-08 00:14:34.556320 | 2026-03-08 00:14:34.556509 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-08 00:14:49.390903 | orchestrator | ok 2026-03-08 00:14:49.401835 | 2026-03-08 00:14:49.401974 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-08 00:15:49.443904 | orchestrator | ok 2026-03-08 00:15:49.452807 | 2026-03-08 00:15:49.452929 | TASK [Deploy manager + bootstrap nodes] 2026-03-08 00:15:51.947342 | orchestrator | 2026-03-08 00:15:51.947558 | orchestrator | # DEPLOY MANAGER 2026-03-08 00:15:51.947583 | orchestrator | 2026-03-08 00:15:51.947598 | orchestrator | + set -e 2026-03-08 00:15:51.947612 | orchestrator | + echo 2026-03-08 00:15:51.947658 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-08 00:15:51.947677 | orchestrator | + echo 2026-03-08 00:15:51.947727 | orchestrator | + cat /opt/manager-vars.sh 2026-03-08 00:15:51.950920 | orchestrator | export NUMBER_OF_NODES=6 2026-03-08 00:15:51.951086 | orchestrator | 2026-03-08 00:15:51.951108 | orchestrator | export CEPH_VERSION=reef 2026-03-08 00:15:51.951125 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-08 00:15:51.951138 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-08 00:15:51.951167 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-08 00:15:51.951178 | orchestrator | 2026-03-08 00:15:51.951197 | orchestrator | export ARA=false 2026-03-08 00:15:51.951209 | orchestrator | export DEPLOY_MODE=manager 2026-03-08 00:15:51.951227 | orchestrator | export TEMPEST=true 2026-03-08 00:15:51.951239 | orchestrator | export IS_ZUUL=true 2026-03-08 00:15:51.951251 | orchestrator | 2026-03-08 00:15:51.951268 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:15:51.951280 | orchestrator | export EXTERNAL_API=false 2026-03-08 00:15:51.951292 | orchestrator | 2026-03-08 00:15:51.951302 | orchestrator | export IMAGE_USER=ubuntu 2026-03-08 00:15:51.951317 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-08 00:15:51.951328 | orchestrator | 2026-03-08 00:15:51.951339 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-08 00:15:51.951363 | orchestrator | 2026-03-08 00:15:51.951375 | orchestrator | + echo 2026-03-08 00:15:51.951388 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-08 00:15:51.952007 | orchestrator | ++ export INTERACTIVE=false 2026-03-08 00:15:51.952029 | orchestrator | ++ INTERACTIVE=false 2026-03-08 00:15:51.952040 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-08 00:15:51.952057 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-08 00:15:51.952361 | orchestrator | + source /opt/manager-vars.sh 2026-03-08 00:15:51.952380 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-08 00:15:51.952392 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-08 00:15:51.952403 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-08 00:15:51.952414 | orchestrator | ++ CEPH_VERSION=reef 2026-03-08 00:15:51.952479 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-08 00:15:51.952493 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-08 00:15:51.952505 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-08 00:15:51.952536 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-08 00:15:51.952553 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-08 00:15:51.952575 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-08 00:15:51.952587 | orchestrator | ++ export ARA=false 2026-03-08 00:15:51.952598 | orchestrator | ++ ARA=false 2026-03-08 00:15:51.952647 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-08 00:15:51.952661 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-08 00:15:51.952672 | orchestrator | ++ export TEMPEST=true 2026-03-08 00:15:51.952683 | orchestrator | ++ TEMPEST=true 2026-03-08 00:15:51.952694 | orchestrator | ++ export IS_ZUUL=true 2026-03-08 00:15:51.952704 | orchestrator | ++ IS_ZUUL=true 2026-03-08 00:15:51.952715 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:15:51.952726 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:15:51.952737 | orchestrator | ++ export EXTERNAL_API=false 2026-03-08 00:15:51.952748 | orchestrator | ++ EXTERNAL_API=false 2026-03-08 00:15:51.952759 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-08 00:15:51.952770 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-08 00:15:51.952781 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-08 00:15:51.952792 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-08 00:15:51.952803 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-08 00:15:51.952814 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-08 00:15:51.952829 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-08 00:15:52.009771 | orchestrator | + docker version 2026-03-08 00:15:52.116241 | orchestrator | Client: Docker Engine - Community 2026-03-08 00:15:52.116347 | orchestrator | Version: 27.5.1 2026-03-08 00:15:52.116363 | orchestrator | API version: 1.47 2026-03-08 00:15:52.116378 | orchestrator | Go version: go1.22.11 2026-03-08 00:15:52.116389 | orchestrator | Git commit: 9f9e405 2026-03-08 00:15:52.116401 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-08 00:15:52.116413 | orchestrator | OS/Arch: linux/amd64 2026-03-08 00:15:52.116424 | orchestrator | Context: default 2026-03-08 00:15:52.116435 | orchestrator | 2026-03-08 00:15:52.116446 | orchestrator | Server: Docker Engine - Community 2026-03-08 00:15:52.116458 | orchestrator | Engine: 2026-03-08 00:15:52.116469 | orchestrator | Version: 27.5.1 2026-03-08 00:15:52.116481 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-08 00:15:52.116522 | orchestrator | Go version: go1.22.11 2026-03-08 00:15:52.116534 | orchestrator | Git commit: 4c9b3b0 2026-03-08 00:15:52.116545 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-08 00:15:52.116555 | orchestrator | OS/Arch: linux/amd64 2026-03-08 00:15:52.116566 | orchestrator | Experimental: false 2026-03-08 00:15:52.116577 | orchestrator | containerd: 2026-03-08 00:15:52.116588 | orchestrator | Version: v2.2.1 2026-03-08 00:15:52.116599 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-08 00:15:52.116611 | orchestrator | runc: 2026-03-08 00:15:52.116621 | orchestrator | Version: 1.3.4 2026-03-08 00:15:52.116668 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-08 00:15:52.116679 | orchestrator | docker-init: 2026-03-08 00:15:52.116690 | orchestrator | Version: 0.19.0 2026-03-08 00:15:52.116702 | orchestrator | GitCommit: de40ad0 2026-03-08 00:15:52.119657 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-08 00:15:52.129574 | orchestrator | + set -e 2026-03-08 00:15:52.129699 | orchestrator | + source /opt/manager-vars.sh 2026-03-08 00:15:52.129717 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-08 00:15:52.129740 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-08 00:15:52.129758 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-08 00:15:52.129776 | orchestrator | ++ CEPH_VERSION=reef 2026-03-08 00:15:52.129794 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-08 
00:15:52.129849 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-08 00:15:52.129863 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-08 00:15:52.129883 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-08 00:15:52.129895 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-08 00:15:52.129906 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-08 00:15:52.129917 | orchestrator | ++ export ARA=false 2026-03-08 00:15:52.129928 | orchestrator | ++ ARA=false 2026-03-08 00:15:52.129954 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-08 00:15:52.129974 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-08 00:15:52.129993 | orchestrator | ++ export TEMPEST=true 2026-03-08 00:15:52.130012 | orchestrator | ++ TEMPEST=true 2026-03-08 00:15:52.130132 | orchestrator | ++ export IS_ZUUL=true 2026-03-08 00:15:52.130146 | orchestrator | ++ IS_ZUUL=true 2026-03-08 00:15:52.130157 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:15:52.130168 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:15:52.130186 | orchestrator | ++ export EXTERNAL_API=false 2026-03-08 00:15:52.130205 | orchestrator | ++ EXTERNAL_API=false 2026-03-08 00:15:52.130223 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-08 00:15:52.130242 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-08 00:15:52.130259 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-08 00:15:52.130278 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-08 00:15:52.130291 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-08 00:15:52.130302 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-08 00:15:52.130313 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-08 00:15:52.130324 | orchestrator | ++ export INTERACTIVE=false 2026-03-08 00:15:52.130335 | orchestrator | ++ INTERACTIVE=false 2026-03-08 00:15:52.130346 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-08 00:15:52.130362 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-08 00:15:52.130387 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-08 00:15:52.130407 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-08 00:15:52.137607 | orchestrator | + set -e 2026-03-08 00:15:52.138210 | orchestrator | + VERSION=9.5.0 2026-03-08 00:15:52.138243 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:15:52.145990 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-08 00:15:52.146098 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:15:52.150333 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:15:52.153430 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-08 00:15:52.160994 | orchestrator | /opt/configuration ~ 2026-03-08 00:15:52.161056 | orchestrator | + set -e 2026-03-08 00:15:52.161069 | orchestrator | + pushd /opt/configuration 2026-03-08 00:15:52.161081 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-08 00:15:52.162434 | orchestrator | + source /opt/venv/bin/activate 2026-03-08 00:15:52.163344 | orchestrator | ++ deactivate nondestructive 2026-03-08 00:15:52.163382 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:52.163398 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:52.163457 | orchestrator | ++ hash -r 2026-03-08 00:15:52.163478 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:52.163501 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-08 00:15:52.163513 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-08 00:15:52.163524 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-08 00:15:52.163536 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-08 00:15:52.163548 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-08 00:15:52.163558 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-08 00:15:52.163570 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-08 00:15:52.163581 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:15:52.163593 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:15:52.163604 | orchestrator | ++ export PATH 2026-03-08 00:15:52.163616 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:52.163651 | orchestrator | ++ '[' -z '' ']' 2026-03-08 00:15:52.163663 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-08 00:15:52.163674 | orchestrator | ++ PS1='(venv) ' 2026-03-08 00:15:52.163685 | orchestrator | ++ export PS1 2026-03-08 00:15:52.163696 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-08 00:15:52.163707 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-08 00:15:52.163718 | orchestrator | ++ hash -r 2026-03-08 00:15:52.163729 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-08 00:15:53.253708 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-08 00:15:53.254554 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-08 00:15:53.256186 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-08 00:15:53.257724 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-08 00:15:53.258921 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-08 00:15:53.269044 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-08 00:15:53.270431 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-08 00:15:53.271328 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-08 00:15:53.272654 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-08 00:15:53.302710 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.5) 2026-03-08 00:15:53.304030 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-08 00:15:53.305739 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-08 00:15:53.307161 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-08 00:15:53.310910 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-08 00:15:53.527679 | orchestrator | ++ which gilt 2026-03-08 00:15:53.531451 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-08 00:15:53.531507 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-08 00:15:53.772193 | orchestrator | osism.cfg-generics: 2026-03-08 00:15:53.911612 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-08 00:15:53.911716 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-08 00:15:53.911913 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-08 00:15:53.911996 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-08 00:15:54.458385 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-08 00:15:54.471026 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-08 00:15:54.800267 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-08 00:15:54.843895 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-08 00:15:54.844002 | orchestrator | + deactivate 2026-03-08 00:15:54.844021 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-08 00:15:54.844035 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:15:54.844048 | orchestrator | + export PATH 2026-03-08 00:15:54.844061 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-08 00:15:54.844074 | orchestrator | + '[' -n '' ']' 2026-03-08 00:15:54.844088 | orchestrator | + hash -r 2026-03-08 00:15:54.844101 | orchestrator | + '[' -n '' ']' 2026-03-08 00:15:54.844114 | orchestrator | + unset VIRTUAL_ENV 2026-03-08 00:15:54.844126 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-08 00:15:54.844138 | orchestrator | ~ 2026-03-08 00:15:54.844150 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-08 00:15:54.844162 | orchestrator | + unset -f deactivate 2026-03-08 00:15:54.844174 | orchestrator | + popd 2026-03-08 00:15:54.845768 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-08 00:15:54.845791 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-08 00:15:54.846370 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-08 00:15:54.903113 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-08 00:15:54.903241 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-08 00:15:54.903310 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-08 00:15:54.965007 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:15:54.965123 | orchestrator | ++ semver 2024.2 2025.1 2026-03-08 00:15:55.023708 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:15:55.023807 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-08 00:15:55.114208 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-08 00:15:55.114342 | orchestrator | + source /opt/venv/bin/activate 2026-03-08 00:15:55.114359 | orchestrator | ++ deactivate nondestructive 2026-03-08 00:15:55.114384 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:55.114397 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:55.114408 | orchestrator | ++ hash -r 2026-03-08 00:15:55.114490 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:55.114504 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-08 00:15:55.114515 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-08 00:15:55.114527 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-08 00:15:55.114543 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-08 00:15:55.114555 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-08 00:15:55.114600 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-08 00:15:55.114612 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-08 00:15:55.114662 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:15:55.114721 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:15:55.114735 | orchestrator | ++ export PATH 2026-03-08 00:15:55.114775 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:15:55.114794 | orchestrator | ++ '[' -z '' ']' 2026-03-08 00:15:55.114813 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-08 00:15:55.114858 | orchestrator | ++ PS1='(venv) ' 2026-03-08 00:15:55.114878 | orchestrator | ++ export PS1 2026-03-08 00:15:55.114903 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-08 00:15:55.114959 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-08 00:15:55.114980 | orchestrator | ++ hash -r 2026-03-08 00:15:55.115037 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-08 00:15:56.133741 | orchestrator | 2026-03-08 00:15:56.133848 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-08 00:15:56.133864 | orchestrator | 2026-03-08 00:15:56.133876 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-08 00:15:56.689320 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:56.689431 | orchestrator | 2026-03-08 00:15:56.689449 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-03-08 00:15:57.658958 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:57.659057 | orchestrator | 2026-03-08 00:15:57.659073 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-08 00:15:57.659119 | orchestrator | 2026-03-08 00:15:57.659131 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:15:59.865826 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:59.865952 | orchestrator | 2026-03-08 00:15:59.865971 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-08 00:15:59.917387 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:59.917491 | orchestrator | 2026-03-08 00:15:59.917507 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-08 00:16:00.366962 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:00.367055 | orchestrator | 2026-03-08 00:16:00.367073 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-08 00:16:00.399374 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:00.399460 | orchestrator | 2026-03-08 00:16:00.399475 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-08 00:16:00.749231 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:00.749335 | orchestrator | 2026-03-08 00:16:00.749351 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-08 00:16:01.085372 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:01.085451 | orchestrator | 2026-03-08 00:16:01.085460 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-08 00:16:01.225447 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:01.225565 | orchestrator | 2026-03-08 00:16:01.225590 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-08 00:16:01.225611 | orchestrator | 2026-03-08 00:16:01.225712 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:16:03.008351 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:03.008488 | orchestrator | 2026-03-08 00:16:03.008506 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-08 00:16:03.111927 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-08 00:16:03.112053 | orchestrator | 2026-03-08 00:16:03.112089 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-08 00:16:03.179055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-08 00:16:03.179159 | orchestrator | 2026-03-08 00:16:03.179175 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-08 00:16:04.279747 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-08 00:16:04.279850 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-08 00:16:04.279865 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-08 00:16:04.279878 | orchestrator | 2026-03-08 00:16:04.279893 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-08 00:16:06.057884 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-08 00:16:06.057988 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-08 00:16:06.058003 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-08 00:16:06.058138 | orchestrator | 2026-03-08 00:16:06.058166 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-08 00:16:06.678991 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:16:06.679090 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:06.679101 | orchestrator | 2026-03-08 00:16:06.679109 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-08 00:16:07.314592 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:16:07.314759 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:07.314777 | orchestrator | 2026-03-08 00:16:07.314790 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-08 00:16:07.376070 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:07.376175 | orchestrator | 2026-03-08 00:16:07.376192 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-08 00:16:07.746711 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:07.746814 | orchestrator | 2026-03-08 00:16:07.746832 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-08 00:16:07.820744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-08 00:16:07.820842 | orchestrator | 2026-03-08 00:16:07.820858 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-08 00:16:09.857797 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:09.857937 | orchestrator | 2026-03-08 00:16:09.857966 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-08 00:16:10.669670 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:10.669766 | orchestrator | 2026-03-08 00:16:10.669777 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-08 00:16:22.994821 | 
orchestrator | changed: [testbed-manager] 2026-03-08 00:16:22.994945 | orchestrator | 2026-03-08 00:16:22.994972 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-08 00:16:23.048452 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:23.048547 | orchestrator | 2026-03-08 00:16:23.048585 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-08 00:16:23.048599 | orchestrator | 2026-03-08 00:16:23.048645 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:16:24.789394 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:24.789512 | orchestrator | 2026-03-08 00:16:24.789548 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-08 00:16:24.908798 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-08 00:16:24.908897 | orchestrator | 2026-03-08 00:16:24.908912 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-08 00:16:24.964940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:16:24.965037 | orchestrator | 2026-03-08 00:16:24.965051 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-08 00:16:27.619870 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:27.619997 | orchestrator | 2026-03-08 00:16:27.620016 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-08 00:16:27.676239 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:27.676342 | orchestrator | 2026-03-08 00:16:27.676358 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-08 00:16:27.798328 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-08 00:16:27.798424 | orchestrator | 2026-03-08 00:16:27.798441 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-08 00:16:30.651658 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-08 00:16:30.651801 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-08 00:16:30.651821 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-08 00:16:30.651833 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-08 00:16:30.651845 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-08 00:16:30.651857 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-08 00:16:30.651868 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-08 00:16:30.651880 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-08 00:16:30.651891 | orchestrator | 2026-03-08 00:16:30.651903 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-08 00:16:31.269760 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:31.269867 | orchestrator | 2026-03-08 00:16:31.269887 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-08 00:16:31.908082 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:31.908185 | orchestrator | 2026-03-08 00:16:31.908203 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-08 00:16:31.974995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-08 00:16:31.975093 | orchestrator | 2026-03-08 00:16:31.975108 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-08 00:16:33.164786 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-08 00:16:33.164869 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-08 00:16:33.164878 | orchestrator | 2026-03-08 00:16:33.164886 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-08 00:16:33.753798 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:33.753902 | orchestrator | 2026-03-08 00:16:33.753918 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-08 00:16:33.802559 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:33.802743 | orchestrator | 2026-03-08 00:16:33.802775 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-08 00:16:33.880968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-08 00:16:33.881071 | orchestrator | 2026-03-08 00:16:33.881094 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-08 00:16:34.494955 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:34.495055 | orchestrator | 2026-03-08 00:16:34.495072 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-08 00:16:34.562850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-08 00:16:34.562952 | orchestrator | 2026-03-08 00:16:34.562968 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-08 00:16:35.878424 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:16:35.878537 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-08 00:16:35.878563 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:35.878638 | orchestrator | 2026-03-08 00:16:35.878655 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-08 00:16:36.506112 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:36.506222 | orchestrator | 2026-03-08 00:16:36.506249 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-08 00:16:36.563362 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:36.563442 | orchestrator | 2026-03-08 00:16:36.563453 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-08 00:16:36.664818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-08 00:16:36.664888 | orchestrator | 2026-03-08 00:16:36.664896 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-08 00:16:37.318289 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:37.318402 | orchestrator | 2026-03-08 00:16:37.318413 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-08 00:16:37.719158 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:37.719264 | orchestrator | 2026-03-08 00:16:37.719282 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-08 00:16:38.983484 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-08 00:16:38.983563 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-08 00:16:38.983572 | orchestrator | 2026-03-08 00:16:38.983581 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-08 00:16:39.621886 | orchestrator | changed: [testbed-manager] 2026-03-08 
00:16:39.622012 | orchestrator | 2026-03-08 00:16:39.622139 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-08 00:16:40.007120 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:40.007223 | orchestrator | 2026-03-08 00:16:40.007239 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-08 00:16:40.372340 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:40.372423 | orchestrator | 2026-03-08 00:16:40.372433 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-08 00:16:40.425165 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:40.425256 | orchestrator | 2026-03-08 00:16:40.425272 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-08 00:16:40.497593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-08 00:16:40.497761 | orchestrator | 2026-03-08 00:16:40.497780 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-08 00:16:40.531886 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:40.531971 | orchestrator | 2026-03-08 00:16:40.531981 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-08 00:16:42.501692 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-08 00:16:42.501796 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-08 00:16:42.501815 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-08 00:16:42.501829 | orchestrator | 2026-03-08 00:16:42.501841 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-08 00:16:43.195352 | orchestrator | changed: [testbed-manager] 2026-03-08 
00:16:43.195431 | orchestrator | 2026-03-08 00:16:43.195442 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-08 00:16:43.903578 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:43.903735 | orchestrator | 2026-03-08 00:16:43.903752 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-08 00:16:44.597552 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:44.597682 | orchestrator | 2026-03-08 00:16:44.597699 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-08 00:16:44.678944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-08 00:16:44.679066 | orchestrator | 2026-03-08 00:16:44.679097 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-08 00:16:44.725083 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:44.725187 | orchestrator | 2026-03-08 00:16:44.725203 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-08 00:16:45.422421 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-08 00:16:45.422568 | orchestrator | 2026-03-08 00:16:45.422592 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-08 00:16:45.515184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-08 00:16:45.515279 | orchestrator | 2026-03-08 00:16:45.515293 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-08 00:16:46.234820 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:46.234927 | orchestrator | 2026-03-08 00:16:46.234945 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-08 00:16:46.829427 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:46.829530 | orchestrator | 2026-03-08 00:16:46.829549 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-08 00:16:46.884328 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:46.884443 | orchestrator | 2026-03-08 00:16:46.884467 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-08 00:16:46.939141 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:46.939242 | orchestrator | 2026-03-08 00:16:46.939257 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-08 00:16:47.779849 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:47.779929 | orchestrator | 2026-03-08 00:16:47.779939 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-08 00:18:06.426246 | orchestrator | changed: [testbed-manager] 2026-03-08 00:18:06.426374 | orchestrator | 2026-03-08 00:18:06.426390 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-08 00:18:07.349274 | orchestrator | ok: [testbed-manager] 2026-03-08 00:18:07.349363 | orchestrator | 2026-03-08 00:18:07.349375 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-08 00:18:07.401712 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:18:07.401808 | orchestrator | 2026-03-08 00:18:07.401824 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-08 00:18:09.873433 | orchestrator | changed: [testbed-manager] 2026-03-08 00:18:09.873534 | orchestrator | 2026-03-08 00:18:09.873557 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-08 00:18:09.915805 | orchestrator | ok: [testbed-manager] 2026-03-08 00:18:09.915885 | orchestrator | 2026-03-08 00:18:09.915901 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-08 00:18:09.915913 | orchestrator | 2026-03-08 00:18:09.915925 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-08 00:18:10.032349 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:18:10.032431 | orchestrator | 2026-03-08 00:18:10.032446 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-08 00:19:10.084920 | orchestrator | Pausing for 60 seconds 2026-03-08 00:19:10.085042 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:10.085059 | orchestrator | 2026-03-08 00:19:10.085073 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-08 00:19:13.048258 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:13.048349 | orchestrator | 2026-03-08 00:19:13.048365 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-08 00:19:54.500968 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-08 00:19:54.501076 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
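The "Wait for an healthy manager service" handler above polls with a bounded retry budget (the log shows a 50-retry budget counting down before the check succeeds). A minimal sketch of that retry pattern in shell, assuming a caller-supplied probe command and a hypothetical `RETRY_DELAY` knob; the handler's actual health probe is not shown in this log:

```shell
#!/usr/bin/env bash

# Hedged sketch of a bounded retry-until-healthy loop, mirroring the
# "FAILED - RETRYING: ... (N retries left)" behavior seen in the log.
wait_healthy() {
  local retries="$1"; shift
  local i
  for ((i = retries; i > 0; i--)); do
    if "$@"; then
      return 0   # probe succeeded: service is healthy
    fi
    echo "FAILED - RETRYING: ($((i - 1)) retries left)" >&2
    sleep "${RETRY_DELAY:-5}"   # RETRY_DELAY is an assumed knob
  done
  return 1       # budget exhausted
}
```

In the Ansible original this is expressed declaratively with `until`/`retries`/`delay` on the handler task rather than an explicit loop.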
2026-03-08 00:19:54.501092 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:54.501106 | orchestrator | 2026-03-08 00:19:54.501140 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-08 00:20:04.844980 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:04.845092 | orchestrator | 2026-03-08 00:20:04.845108 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-08 00:20:04.920924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-08 00:20:04.921023 | orchestrator | 2026-03-08 00:20:04.921039 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-08 00:20:04.921051 | orchestrator | 2026-03-08 00:20:04.921061 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-08 00:20:04.968881 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:04.968958 | orchestrator | 2026-03-08 00:20:04.968967 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-08 00:20:05.042487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-08 00:20:05.042583 | orchestrator | 2026-03-08 00:20:05.042599 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-08 00:20:05.802961 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:05.803066 | orchestrator | 2026-03-08 00:20:05.803082 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-08 00:20:09.097372 | orchestrator | ok: [testbed-manager] 2026-03-08 00:20:09.097461 | orchestrator | 2026-03-08 00:20:09.097478 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-08 00:20:09.167095 | orchestrator | ok: [testbed-manager] => { 2026-03-08 00:20:09.167175 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-08 00:20:09.167190 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-08 00:20:09.167203 | orchestrator | "Checking running containers against expected versions...", 2026-03-08 00:20:09.167215 | orchestrator | "", 2026-03-08 00:20:09.167226 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-08 00:20:09.167237 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-08 00:20:09.167249 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167260 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-08 00:20:09.167271 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167282 | orchestrator | "", 2026-03-08 00:20:09.167293 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-08 00:20:09.167304 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-08 00:20:09.167315 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167347 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-08 00:20:09.167359 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167369 | orchestrator | "", 2026-03-08 00:20:09.167380 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-08 00:20:09.167391 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-08 00:20:09.167402 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167413 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-08 00:20:09.167424 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167435 | orchestrator | 
"", 2026-03-08 00:20:09.167445 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-08 00:20:09.167457 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-08 00:20:09.167467 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167478 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-08 00:20:09.167489 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167500 | orchestrator | "", 2026-03-08 00:20:09.167510 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-08 00:20:09.167523 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-08 00:20:09.167534 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167545 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-08 00:20:09.167556 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167566 | orchestrator | "", 2026-03-08 00:20:09.167577 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-08 00:20:09.167588 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.167599 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167645 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.167659 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167672 | orchestrator | "", 2026-03-08 00:20:09.167686 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-08 00:20:09.167698 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-08 00:20:09.167711 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167724 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-08 00:20:09.167737 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167750 | orchestrator | "", 2026-03-08 00:20:09.167763 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-08 00:20:09.167776 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-08 00:20:09.167788 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167801 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-08 00:20:09.167814 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167826 | orchestrator | "", 2026-03-08 00:20:09.167838 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-08 00:20:09.167851 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-08 00:20:09.167864 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167877 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-08 00:20:09.167890 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167902 | orchestrator | "", 2026-03-08 00:20:09.167914 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-08 00:20:09.167928 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-08 00:20:09.167940 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.167952 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-08 00:20:09.167965 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.167978 | orchestrator | "", 2026-03-08 00:20:09.167991 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-08 00:20:09.168004 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168016 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.168037 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168048 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.168059 | orchestrator | "", 2026-03-08 00:20:09.168069 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-08 00:20:09.168080 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168091 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.168102 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168112 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.168123 | orchestrator | "", 2026-03-08 00:20:09.168135 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-08 00:20:09.168146 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168156 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.168167 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168178 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.168189 | orchestrator | "", 2026-03-08 00:20:09.168200 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-08 00:20:09.168210 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168221 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.168232 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168258 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.168269 | orchestrator | "", 2026-03-08 00:20:09.168280 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-08 00:20:09.168291 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168302 | orchestrator | " Enabled: true", 2026-03-08 00:20:09.168321 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-08 00:20:09.168332 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:20:09.168343 | orchestrator | "", 2026-03-08 00:20:09.168354 | orchestrator | "=== Summary ===", 2026-03-08 00:20:09.168365 | orchestrator | "Errors (version mismatches): 0", 2026-03-08 00:20:09.168376 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-08 00:20:09.168387 | orchestrator | "", 2026-03-08 00:20:09.168397 | orchestrator | "✅ All running containers match expected versions!" 2026-03-08 00:20:09.168408 | orchestrator | ] 2026-03-08 00:20:09.168419 | orchestrator | } 2026-03-08 00:20:09.168430 | orchestrator | 2026-03-08 00:20:09.168441 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-08 00:20:09.224110 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:09.224224 | orchestrator | 2026-03-08 00:20:09.224242 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:20:09.224256 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-08 00:20:09.224268 | orchestrator | 2026-03-08 00:20:09.321588 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-08 00:20:09.321721 | orchestrator | + deactivate 2026-03-08 00:20:09.321737 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-08 00:20:09.321750 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:20:09.321761 | orchestrator | + export PATH 2026-03-08 00:20:09.321773 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-08 00:20:09.321784 | orchestrator | + '[' -n '' ']' 2026-03-08 00:20:09.321795 | orchestrator | + hash -r 2026-03-08 00:20:09.321806 | orchestrator | + '[' -n '' ']' 2026-03-08 00:20:09.321817 | orchestrator | + unset VIRTUAL_ENV 2026-03-08 00:20:09.321828 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-08 00:20:09.321839 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-08 00:20:09.321850 | orchestrator | + unset -f deactivate 2026-03-08 00:20:09.321862 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-08 00:20:09.330644 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-08 00:20:09.330688 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-08 00:20:09.330696 | orchestrator | + local max_attempts=60 2026-03-08 00:20:09.330704 | orchestrator | + local name=ceph-ansible 2026-03-08 00:20:09.330726 | orchestrator | + local attempt_num=1 2026-03-08 00:20:09.331420 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:20:09.362784 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:20:09.362853 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-08 00:20:09.362866 | orchestrator | + local max_attempts=60 2026-03-08 00:20:09.362878 | orchestrator | + local name=kolla-ansible 2026-03-08 00:20:09.362890 | orchestrator | + local attempt_num=1 2026-03-08 00:20:09.363752 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-08 00:20:09.396818 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:20:09.396894 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-08 00:20:09.396911 | orchestrator | + local max_attempts=60 2026-03-08 00:20:09.396926 | orchestrator | + local name=osism-ansible 2026-03-08 00:20:09.396940 | orchestrator | + local attempt_num=1 2026-03-08 00:20:09.397772 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-08 00:20:09.435782 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:20:09.435831 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-08 00:20:09.435837 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-08 00:20:10.125046 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-08 00:20:10.293586 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-08 00:20:10.293734 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.293751 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.293763 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-08 00:20:10.293776 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-03-08 00:20:10.293787 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.293822 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.293842 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy) 2026-03-08 00:20:10.293861 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.293880 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-03-08 00:20:10.293899 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes 
ago Up About a minute (healthy) 2026-03-08 00:20:10.293917 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-03-08 00:20:10.293937 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.293983 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-08 00:20:10.294003 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.294063 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-03-08 00:20:10.300295 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-08 00:20:10.349099 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-08 00:20:10.349195 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-08 00:20:10.353514 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-08 00:20:22.640809 | orchestrator | 2026-03-08 00:20:22 | INFO  | Task 1568f24e-e3a6-48bd-ad0c-6cfb8a59b86c (resolvconf) was prepared for execution. 2026-03-08 00:20:22.640919 | orchestrator | 2026-03-08 00:20:22 | INFO  | It takes a moment until task 1568f24e-e3a6-48bd-ad0c-6cfb8a59b86c (resolvconf) has been started and output is visible here. 
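The version check output earlier in the log compares each service's expected container image against the one actually running. A minimal sketch of that comparison, assuming the running image would normally come from `docker inspect`; here it is supplied on stdin so the example is self-contained, and the registry and tags are placeholders:

```shell
#!/usr/bin/env bash
# Hypothetical version check: read "<service> <expected> <running>" lines
# and report MATCH/MISMATCH per service plus an error count, in the same
# spirit as the "OSISM Container Version Check" output above.
check_versions() {
    local errors=0 service expected running
    while read -r service expected running; do
        if [ "$expected" = "$running" ]; then
            echo "${service}: MATCH"
        else
            echo "${service}: MISMATCH (expected ${expected}, running ${running})"
            errors=$((errors + 1))
        fi
    done
    echo "Errors (version mismatches): ${errors}"
    return "$errors"
}

check_versions <<'EOF'
osism-ansible registry.example.org/osism/osism-ansible:0.20251130.0 registry.example.org/osism/osism-ansible:0.20251130.0
api registry.example.org/osism/osism:0.20251130.1 registry.example.org/osism/osism:0.20251130.1
EOF
```

Returning the mismatch count as the exit status lets a deployment script fail fast when any running container drifts from the expected tag.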
2026-03-08 00:20:36.629168 | orchestrator | 2026-03-08 00:20:36.629285 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-08 00:20:36.629302 | orchestrator | 2026-03-08 00:20:36.629315 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:20:36.629327 | orchestrator | Sunday 08 March 2026 00:20:26 +0000 (0:00:00.133) 0:00:00.133 ********** 2026-03-08 00:20:36.629339 | orchestrator | ok: [testbed-manager] 2026-03-08 00:20:36.629352 | orchestrator | 2026-03-08 00:20:36.629364 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-08 00:20:36.629376 | orchestrator | Sunday 08 March 2026 00:20:30 +0000 (0:00:03.630) 0:00:03.764 ********** 2026-03-08 00:20:36.629388 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:36.629400 | orchestrator | 2026-03-08 00:20:36.629412 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-08 00:20:36.629423 | orchestrator | Sunday 08 March 2026 00:20:30 +0000 (0:00:00.069) 0:00:03.833 ********** 2026-03-08 00:20:36.629435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-08 00:20:36.629448 | orchestrator | 2026-03-08 00:20:36.629459 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-08 00:20:36.629470 | orchestrator | Sunday 08 March 2026 00:20:30 +0000 (0:00:00.081) 0:00:03.915 ********** 2026-03-08 00:20:36.629482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:20:36.629494 | orchestrator | 2026-03-08 00:20:36.629505 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-08 00:20:36.629537 | orchestrator | Sunday 08 March 2026 00:20:30 +0000 (0:00:00.083) 0:00:03.998 ********** 2026-03-08 00:20:36.629549 | orchestrator | ok: [testbed-manager] 2026-03-08 00:20:36.629561 | orchestrator | 2026-03-08 00:20:36.629572 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-08 00:20:36.629583 | orchestrator | Sunday 08 March 2026 00:20:31 +0000 (0:00:00.916) 0:00:04.915 ********** 2026-03-08 00:20:36.629636 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:36.629649 | orchestrator | 2026-03-08 00:20:36.629660 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-08 00:20:36.629671 | orchestrator | Sunday 08 March 2026 00:20:31 +0000 (0:00:00.056) 0:00:04.972 ********** 2026-03-08 00:20:36.629682 | orchestrator | ok: [testbed-manager] 2026-03-08 00:20:36.629693 | orchestrator | 2026-03-08 00:20:36.629706 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-08 00:20:36.629743 | orchestrator | Sunday 08 March 2026 00:20:31 +0000 (0:00:00.433) 0:00:05.405 ********** 2026-03-08 00:20:36.629756 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:36.629768 | orchestrator | 2026-03-08 00:20:36.629780 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-08 00:20:36.629794 | orchestrator | Sunday 08 March 2026 00:20:31 +0000 (0:00:00.071) 0:00:05.477 ********** 2026-03-08 00:20:36.629807 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:36.629820 | orchestrator | 2026-03-08 00:20:36.629833 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-08 00:20:36.629845 | orchestrator | Sunday 08 March 2026 00:20:32 +0000 (0:00:00.501) 0:00:05.979 ********** 2026-03-08 00:20:36.629857 | orchestrator | changed: 
[testbed-manager] 2026-03-08 00:20:36.629869 | orchestrator | 2026-03-08 00:20:36.629881 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-08 00:20:36.629894 | orchestrator | Sunday 08 March 2026 00:20:33 +0000 (0:00:00.957) 0:00:06.936 ********** 2026-03-08 00:20:36.629906 | orchestrator | ok: [testbed-manager] 2026-03-08 00:20:36.629918 | orchestrator | 2026-03-08 00:20:36.629931 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-08 00:20:36.629943 | orchestrator | Sunday 08 March 2026 00:20:35 +0000 (0:00:01.850) 0:00:08.787 ********** 2026-03-08 00:20:36.629956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-08 00:20:36.629969 | orchestrator | 2026-03-08 00:20:36.629981 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-08 00:20:36.629994 | orchestrator | Sunday 08 March 2026 00:20:35 +0000 (0:00:00.083) 0:00:08.870 ********** 2026-03-08 00:20:36.630010 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:36.630170 | orchestrator | 2026-03-08 00:20:36.630190 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:20:36.630209 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:20:36.630225 | orchestrator | 2026-03-08 00:20:36.630241 | orchestrator | 2026-03-08 00:20:36.630258 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:20:36.630275 | orchestrator | Sunday 08 March 2026 00:20:36 +0000 (0:00:01.106) 0:00:09.977 ********** 2026-03-08 00:20:36.630293 | orchestrator | =============================================================================== 2026-03-08 00:20:36.630309 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.63s 2026-03-08 00:20:36.630325 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.85s 2026-03-08 00:20:36.630342 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s 2026-03-08 00:20:36.630358 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.96s 2026-03-08 00:20:36.630374 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.92s 2026-03-08 00:20:36.630391 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2026-03-08 00:20:36.630434 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s 2026-03-08 00:20:36.630453 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-08 00:20:36.630472 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-08 00:20:36.630490 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-08 00:20:36.630507 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-08 00:20:36.630526 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-08 00:20:36.630544 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-08 00:20:36.893727 | orchestrator | + osism apply sshconfig 2026-03-08 00:20:49.091767 | orchestrator | 2026-03-08 00:20:49 | INFO  | Task 333e8651-fee4-45df-8de3-7a475077410f (sshconfig) was prepared for execution. 
2026-03-08 00:20:49.091891 | orchestrator | 2026-03-08 00:20:49 | INFO  | It takes a moment until task 333e8651-fee4-45df-8de3-7a475077410f (sshconfig) has been started and output is visible here. 2026-03-08 00:20:59.829947 | orchestrator | 2026-03-08 00:20:59.830108 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-08 00:20:59.830129 | orchestrator | 2026-03-08 00:20:59.830141 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-08 00:20:59.830153 | orchestrator | Sunday 08 March 2026 00:20:52 +0000 (0:00:00.115) 0:00:00.115 ********** 2026-03-08 00:20:59.830164 | orchestrator | ok: [testbed-manager] 2026-03-08 00:20:59.830177 | orchestrator | 2026-03-08 00:20:59.830209 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-08 00:20:59.830221 | orchestrator | Sunday 08 March 2026 00:20:53 +0000 (0:00:00.512) 0:00:00.627 ********** 2026-03-08 00:20:59.830232 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:59.830244 | orchestrator | 2026-03-08 00:20:59.830256 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-08 00:20:59.830267 | orchestrator | Sunday 08 March 2026 00:20:53 +0000 (0:00:00.412) 0:00:01.039 ********** 2026-03-08 00:20:59.830278 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-08 00:20:59.830289 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-08 00:20:59.830301 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-08 00:20:59.830312 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-08 00:20:59.830323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-08 00:20:59.830334 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-08 00:20:59.830345 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-08 00:20:59.830356 | orchestrator | 2026-03-08 00:20:59.830367 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-08 00:20:59.830378 | orchestrator | Sunday 08 March 2026 00:20:58 +0000 (0:00:05.173) 0:00:06.213 ********** 2026-03-08 00:20:59.830389 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:59.830400 | orchestrator | 2026-03-08 00:20:59.830411 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-08 00:20:59.830422 | orchestrator | Sunday 08 March 2026 00:20:59 +0000 (0:00:00.070) 0:00:06.283 ********** 2026-03-08 00:20:59.830433 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:59.830444 | orchestrator | 2026-03-08 00:20:59.830455 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:20:59.830467 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:20:59.830479 | orchestrator | 2026-03-08 00:20:59.830493 | orchestrator | 2026-03-08 00:20:59.830506 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:20:59.830519 | orchestrator | Sunday 08 March 2026 00:20:59 +0000 (0:00:00.554) 0:00:06.838 ********** 2026-03-08 00:20:59.830532 | orchestrator | =============================================================================== 2026-03-08 00:20:59.830545 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.17s 2026-03-08 00:20:59.830558 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-03-08 00:20:59.830570 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s 2026-03-08 00:20:59.830613 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.41s 2026-03-08 00:20:59.830633 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-08 00:21:00.101188 | orchestrator | + osism apply known-hosts 2026-03-08 00:21:12.133321 | orchestrator | 2026-03-08 00:21:12 | INFO  | Task 1d0b2f4f-fee6-4ca9-b288-5ca5fdc51e35 (known-hosts) was prepared for execution. 2026-03-08 00:21:12.133433 | orchestrator | 2026-03-08 00:21:12 | INFO  | It takes a moment until task 1d0b2f4f-fee6-4ca9-b288-5ca5fdc51e35 (known-hosts) has been started and output is visible here. 2026-03-08 00:21:28.516804 | orchestrator | 2026-03-08 00:21:28.516915 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-08 00:21:28.516935 | orchestrator | 2026-03-08 00:21:28.516951 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-08 00:21:28.516967 | orchestrator | Sunday 08 March 2026 00:21:16 +0000 (0:00:00.158) 0:00:00.158 ********** 2026-03-08 00:21:28.516982 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-08 00:21:28.516998 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-08 00:21:28.517012 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-08 00:21:28.517028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-08 00:21:28.517042 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-08 00:21:28.517056 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-08 00:21:28.517070 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-08 00:21:28.517085 | orchestrator | 2026-03-08 00:21:28.517099 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-08 00:21:28.517115 | orchestrator | Sunday 08 March 2026 00:21:22 +0000 (0:00:05.899) 0:00:06.058 ********** 2026-03-08 
00:21:28.517131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-08 00:21:28.517147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-08 00:21:28.517162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-08 00:21:28.517176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-08 00:21:28.517191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-08 00:21:28.517205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-08 00:21:28.517230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-08 00:21:28.517245 | orchestrator | 2026-03-08 00:21:28.517260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:28.517275 | orchestrator | Sunday 08 March 2026 00:21:22 +0000 (0:00:00.154) 0:00:06.212 ********** 2026-03-08 00:21:28.517290 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD0iQswD5ewSuvZMj8rbJ7QH6Z+Vjb5ZnBZ9I1OgA5bVvoe4UiI+vHz4vbeCINHEwCxLT1YoHcBrhd17eJsVmsY=) 2026-03-08 00:21:28.517314 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAxyQFZ5i48XAzVV77M+MBIbjQUzGiqK8HbuobHDUOx1FbNVojf1lIgRp751kvnh3Vd+K6p5TE0nOC6PDy64JNh6cx3lGOycUrlAhjQtQUdPlz+KabttJKq7o5dN6lKLW36W49apNf4Tvfcp2usnYeXCiQo5w+fPS4uhnBHTRN0Gq/QNEDZMht3VLu+SDvrPT3Lb4C4/l3C+b56GafHFi32JfxCk17JCspCsr87D9GvXVadwQmQr8E/eCAOFmaZcH4rRILwikrtMC131brs6wUiugwdcxmeGSwjYm/efZVlRCUHbGUogOus2cO4NNE2M3AoPqzVWrG1yUZAQziZGJ0XLCCm26ldUKbegN9CESw5IHFLpvQOX6FbsWVUbw9WGleyqoKrUCBSWtqOyIktYHDLPUCp9xdG+Rs2F4uFT1DRcjN2hlfMDSTRJspVdjbz98V8zmQIaPTgaxFkHzepwN3rCwkqQ2YO4KEedCBWNJwUvokF8axe1SmhxxRbK4dPQc=) 2026-03-08 00:21:28.517359 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGi/WVhm6w2qzTqq3qxHwrg1t6pKDluap/4TXqizS/O5) 2026-03-08 00:21:28.517376 | orchestrator | 2026-03-08 00:21:28.517392 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:28.517409 | orchestrator | Sunday 08 March 2026 00:21:23 +0000 (0:00:01.142) 0:00:07.354 ********** 2026-03-08 00:21:28.517444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsWWCFaW79hTEsFRUkGzIABQM5o4aFqmDmc/fJwId+12QFzdayRbInGFMIUJLfZOkX3EEx/AeMTE1q6nW5TZ46tDiNKoekb4Cfn+MyuU0BaydIqzuDFAgHu3SBRwCjjnsM9MDfN2lin64uUTjoGLSO5kvlB8ih1jIegkw2rXjKBZgbAWyYuFhVHSfv7c5qjXji/AgfXVxoIhu6WmDSlYxJxrw/ua0I03EVpu1mclPEgH4Z6L6Ob7Sm4ig1vXX/v/4IypDyMO/okbsgQ/sQS05LLkVY2+EPoijm+MNToWaYuFEkK1RJJ1lIxKJrUMjeN6DXRrsLKlAa/6lkNM5a3mwqsLE3vm6LiqJMJEpNo3EY60Aw78QHR5V7nJdfZR0eYlQwGHqLz2YIXq/lq0fAS40OkfByLHoO+xskfuKCWJX9wEci/WLykHgeM3Sq6VtNFjPP0WYbG5lR0hljvoIbTp1DvOWyxjkJMSBhpOJdoHuEiL1kVL0bzX9yJA505etCfAM=) 2026-03-08 00:21:28.517461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqeW/WTKjFPIdoZEdKXLSZbm56HQcxdsZkb0EuvlPzrhWfgxAc76PZ2e3C0s2ronpfeQZY+CRy2RqsfQwfpyZo=) 2026-03-08 00:21:28.517477 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFBgpVgEp8oBjSiEq9SfLqNudOJWTZ3YCsEBrYvSawR9) 2026-03-08 00:21:28.517493 | orchestrator | 2026-03-08 00:21:28.517508 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:28.517524 | orchestrator | Sunday 08 March 2026 00:21:24 +0000 (0:00:01.038) 0:00:08.393 ********** 2026-03-08 00:21:28.517538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZ+zvGlnD5QLVoiX41/mT6+NC4KNCdZuzQgIfkGC6x/) 2026-03-08 00:21:28.517555 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlnvZW9FvQ4TvgLW+3szoNrkref4vAj7ESMwyadQrWCIPcYYiKrwT0fPPhyz4WaF29Jx3x+IVSSQne4heWae0smOt0d1gdCGuN0x/ebxK/gT/30x5VIam9Q2/1EoyoEcIGWHiB+/U4qoR6WuRz108Iqt1JvATuUOKcJlAz35fwKuw0yulkdZhm6E2qI+bnjYY5nVJcO0Pky0wOlPVfQDTJuAwZPhnFU0Jl0TGcywoQPDpmiyejCiZEPKNzGF1kMkVkPUT1+d3slb+7bfLb7hLOnBN4TH2JFjALf9mZhVJ3w3A+xA6yj/WSbVE6vmSeTiF58O9ReFT/eur+FjBNqTVECckL7609undKuI+SH5rOHDz191WnJ/1ZRvcK8zoJrp7OJozPgYxLydHNsPi4fkbbZip72pLHv2Woqwt332zPeJ2nIBPew2m9N7kdbSVC3iLGGyIMOzlLQmBUgs7sFEUY8arvwizfrEbb94Uhro6qwhRQhYujwzON2JbAuPIZ6Hc=) 2026-03-08 00:21:28.517633 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEdnNw7u4S++JM+o9xQbDSAxPzLwVJMNTtOF2g4B6pe2vZQPfHOsanXqZTfznteTdndyOoRJKF/31SJodrRvQEI=) 2026-03-08 00:21:28.517651 | orchestrator | 2026-03-08 00:21:28.517667 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:28.517683 | orchestrator | Sunday 08 March 2026 00:21:25 +0000 (0:00:01.020) 0:00:09.413 ********** 
2026-03-08 00:21:28.517699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU0k2BgothrjUmjN3uu5suZfBBViGy+TKb/p7aUUaI3xS6zGHFigKo+an1IqCRwZ8J7Wsv27HoXr/KE4Nj1kivhNnZZCrs92BoQwVmhQn+ot8lzR8ZSF6IBz185zvtqpfVPjHiDtHeVylOdAvOUv2w6o6sw09LGOBZ0U09n6Hk7uzEVsL2cJdJGFrVRhAVtN4z9f8XfgILkEyFj+wTPwMZiadIaOhPEvWWrNgMAj2ZuPbCd+ilAQnIzCUbfA05cFbPg8BfQLQ6Z18u6SKbiLFSWihKhl1S7cTwu3a4NmKiHtNYxDxvFxZ43QKbw6UY7JQLGbBQ3g3oCaICfAN7t/I/xa4VlfkWZaHuEm4CLyrNUTkoWQFDpEe7pXz6QaN2MaMM9DN+bhMsKQjFjQ8ZFjU7JJTHV4ZSpzU5kDU7i3SCq46/fjxERXuI34e6cU3I3fDccBsCkd6e4TexCtzGG2GRCiz3DLUgWTn1fgYuSoQWj4rkNmm1c+psY5fdxRwmnUE=) 2026-03-08 00:21:28.517715 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDwT2CgOGxbpb5cWE0F5RtiYwHUHia5FtiVpj5E5D4TjkKDk7xL9fLUZB6rj8JOQnC3gDVZL7Bm10WAwbGKjW1g=) 2026-03-08 00:21:28.517740 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdpY4ruxWeoOKbek+MFZS9S3btj91G85o6fIhQ326JZ) 2026-03-08 00:21:28.517755 | orchestrator | 2026-03-08 00:21:28.517769 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:28.517783 | orchestrator | Sunday 08 March 2026 00:21:26 +0000 (0:00:01.006) 0:00:10.419 ********** 2026-03-08 00:21:28.517878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZVO9mqI2PomnxmuRzOsSZrvHiXhpPedKz6iXGM45WB8OIsFfBCrwpX/DqizfcMwr88qiP4fNHR5Oq7tWic8++nOft1nzEnCH9afzuRIlk4aoTj+COo7tSjFFoRwgWwnfgnGGo9GmaifgjvFoB4oXLIBmV6oRuXUFj9UA2aBl8WenqG8tvLXftG7C8NuYeoc2Bp+tuUA1rdFO91bqiRZT1l++OBYBZMYGVBJEr1buW3wNzR8TmiJ54eEmQ2/sGcl8Sv3Ix8TRJfM+DnKli5Tl55gHdaYsMPbhMGyvbw/2OaStA11FjjOnTePfiJvlj5BVp2FyCsr9dM1DtgleSQ3YrZ1nm5gwahdhfhpK8TAMUeAjkHldQQxbI3XQjEe6yihFmuoztvSBZ/V5cDvjJY0m2sS/hwqh/02q7spoRzQKQLfnSx5odLoEYKjyR8MIDQoaNFAeKAtkExK9HjldeEOQ0ryGfYN6Vs8RNuCEyfw4pTnGEcOJ8GZEcsR1e9FdBOqU=) 2026-03-08 00:21:28.517894 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFo6Os4GH5QXCYB2+wJnaorwAwsY8QcCk+AFvojEE6XM1VlOBcbpSTB7KQ/rX0s02jEwp7knzKyYd5NfzCa1iK4=) 2026-03-08 00:21:28.517908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKVKop9D8ZMbe2x0utE8g0pVte26gtkHp0Quw+UFJN8/) 2026-03-08 00:21:28.517923 | orchestrator | 2026-03-08 00:21:28.517938 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:28.517952 | orchestrator | Sunday 08 March 2026 00:21:27 +0000 (0:00:01.018) 0:00:11.438 ********** 2026-03-08 00:21:28.517976 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGVcITSutWWz69hIszH6XJSNBT/8zglXvNc1kXSAXIgY) 2026-03-08 00:21:39.091200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC02NoO4H1QqFigFlEV94T5P/EMF82OhdBVE9mNsggMvgKTFinUudwQvzHfSl00C2+HN48ind+IN3R3Uwj3Ac2Ybu6E/Ocm9s0DQdHQIGUIRfG9Owm0spj9TQ+fYFIxv1stZAWZqXLrx9zRbvJkOWcPPxo+GXiFYcN3AROYtnS2ZufBhCdbkGt9aPwtsnYWn2243JrpBfQY/TGCZrW+UsNFMkxS/ro9/k7f2aTRHcw/Nif+9yNc2ErVexUN4QgkcMMpG4HEWBU69HwisTbig3zC9EDAbQstrhbOEmJTAgb033x6HUDcYJq9qLCxc4Lgy7zHn9KNsklpNl9njYyLdXT6qeksnWdQFOEKtfkPhRlIx4PG2nWjzbThQP+/INl4rpb+zAnIYvl6Is2dBhPn9OCGAeccyXFe/CC79U+pctRO21HTqJm9YmQ+k1NIOoiTt4MZyqdSPwikQ0zsiKaVQE+WglGHumCBDTtTGQUmqFioPCvWjuQTwiR1PiNUEasi9rc=) 2026-03-08 00:21:39.091293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJd2Bngxn0fghdI794EXIVeyd8QjN94uLIfpqOanWBarWxVpet6LRPQMY5tcN6/MQDk1Tgelth59g/lupRxpekg=) 2026-03-08 00:21:39.091306 | orchestrator | 2026-03-08 00:21:39.091315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:39.091324 | orchestrator | Sunday 08 March 2026 00:21:28 +0000 (0:00:01.013) 0:00:12.451 ********** 2026-03-08 00:21:39.091333 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCukxM9udqiMD0CcbDHF1JxICPqgPNsWegbuV+YLOwoVKmGierB441prTTjMWwSjmcWdpg/987UMKm59v5mW/KYjZnI+PqGRo7P63EUdeJqio22u+QPW9d7QLxYQqtlf2aQWPju2ROJgW3to7LELmwV8SWI/kxqyRBaG/1npz7jbQnwrb4E8hqWoev3qNOVecYbopZrL5X3c6Ve9NFWSmbUuutI/eKDvcV+g0dyzo/gpyiww6b6YQf69ttHUBvBrT5cwy16oB+a4oGwuCxpyM1N85Jg8MgY++UJmQzwyBvLy6GWzz8n5D2/z4AZ/DYyjB4aVAimvo3AL9kCXTt7ZUVqc/1sxZF9V+Lbt9U6neAb/Nq1OMx1ax6jHcG7uG89sGHc7LV6UQs8r3JGZX/LvD6ZAwlepO+VP9fDVQE8x0px3Z+vQT4JFMrMfOnucy2ZvGw16MENAyE9OwSj7JMqnVKalRXdNKh3GRa0UTs0LMZlmQZV76aDpg5a+bEkMvUCmM0=) 2026-03-08 00:21:39.091342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP2RwD3nHL/RhXEvFKYXvmb5HwnnrIfAYGDB8EiCXKJk) 2026-03-08 00:21:39.091351 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHW7ARLmpe6MHWRAYeI4WOfFbRLiY7WwdlqOQ2GZvfxvoBW+hqIHtW/Gwa48Hchc3HqyUbQBbUT3zqTT34NHt5A=) 2026-03-08 00:21:39.091375 | orchestrator | 2026-03-08 00:21:39.091383 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-08 00:21:39.091392 | orchestrator | Sunday 08 March 2026 00:21:29 +0000 (0:00:01.017) 0:00:13.468 ********** 2026-03-08 00:21:39.091401 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-08 00:21:39.091409 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-08 00:21:39.091417 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-08 00:21:39.091425 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-08 00:21:39.091433 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-08 00:21:39.091441 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-08 00:21:39.091448 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-08 00:21:39.091456 | orchestrator | 2026-03-08 00:21:39.091464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-08 00:21:39.091473 | orchestrator | Sunday 08 March 2026 00:21:34 +0000 (0:00:05.248) 0:00:18.717 ********** 2026-03-08 00:21:39.091482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-08 00:21:39.091492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-08 00:21:39.091500 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-08 00:21:39.091508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-08 00:21:39.091516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-08 00:21:39.091524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-08 00:21:39.091532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-08 00:21:39.091540 | orchestrator | 2026-03-08 00:21:39.091588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:39.091602 | orchestrator | Sunday 08 March 2026 00:21:34 +0000 (0:00:00.167) 0:00:18.884 ********** 2026-03-08 00:21:39.091615 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGi/WVhm6w2qzTqq3qxHwrg1t6pKDluap/4TXqizS/O5) 2026-03-08 00:21:39.091629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAxyQFZ5i48XAzVV77M+MBIbjQUzGiqK8HbuobHDUOx1FbNVojf1lIgRp751kvnh3Vd+K6p5TE0nOC6PDy64JNh6cx3lGOycUrlAhjQtQUdPlz+KabttJKq7o5dN6lKLW36W49apNf4Tvfcp2usnYeXCiQo5w+fPS4uhnBHTRN0Gq/QNEDZMht3VLu+SDvrPT3Lb4C4/l3C+b56GafHFi32JfxCk17JCspCsr87D9GvXVadwQmQr8E/eCAOFmaZcH4rRILwikrtMC131brs6wUiugwdcxmeGSwjYm/efZVlRCUHbGUogOus2cO4NNE2M3AoPqzVWrG1yUZAQziZGJ0XLCCm26ldUKbegN9CESw5IHFLpvQOX6FbsWVUbw9WGleyqoKrUCBSWtqOyIktYHDLPUCp9xdG+Rs2F4uFT1DRcjN2hlfMDSTRJspVdjbz98V8zmQIaPTgaxFkHzepwN3rCwkqQ2YO4KEedCBWNJwUvokF8axe1SmhxxRbK4dPQc=) 2026-03-08 00:21:39.091649 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD0iQswD5ewSuvZMj8rbJ7QH6Z+Vjb5ZnBZ9I1OgA5bVvoe4UiI+vHz4vbeCINHEwCxLT1YoHcBrhd17eJsVmsY=) 2026-03-08 00:21:39.091663 | orchestrator | 2026-03-08 00:21:39.091670 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:39.091678 | orchestrator | Sunday 08 March 2026 00:21:35 +0000 (0:00:01.039) 0:00:19.923 ********** 2026-03-08 00:21:39.091686 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqeW/WTKjFPIdoZEdKXLSZbm56HQcxdsZkb0EuvlPzrhWfgxAc76PZ2e3C0s2ronpfeQZY+CRy2RqsfQwfpyZo=) 2026-03-08 00:21:39.091694 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsWWCFaW79hTEsFRUkGzIABQM5o4aFqmDmc/fJwId+12QFzdayRbInGFMIUJLfZOkX3EEx/AeMTE1q6nW5TZ46tDiNKoekb4Cfn+MyuU0BaydIqzuDFAgHu3SBRwCjjnsM9MDfN2lin64uUTjoGLSO5kvlB8ih1jIegkw2rXjKBZgbAWyYuFhVHSfv7c5qjXji/AgfXVxoIhu6WmDSlYxJxrw/ua0I03EVpu1mclPEgH4Z6L6Ob7Sm4ig1vXX/v/4IypDyMO/okbsgQ/sQS05LLkVY2+EPoijm+MNToWaYuFEkK1RJJ1lIxKJrUMjeN6DXRrsLKlAa/6lkNM5a3mwqsLE3vm6LiqJMJEpNo3EY60Aw78QHR5V7nJdfZR0eYlQwGHqLz2YIXq/lq0fAS40OkfByLHoO+xskfuKCWJX9wEci/WLykHgeM3Sq6VtNFjPP0WYbG5lR0hljvoIbTp1DvOWyxjkJMSBhpOJdoHuEiL1kVL0bzX9yJA505etCfAM=) 
2026-03-08 00:21:39.091701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFBgpVgEp8oBjSiEq9SfLqNudOJWTZ3YCsEBrYvSawR9) 2026-03-08 00:21:39.091710 | orchestrator | 2026-03-08 00:21:39.091718 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:39.091727 | orchestrator | Sunday 08 March 2026 00:21:36 +0000 (0:00:01.020) 0:00:20.944 ********** 2026-03-08 00:21:39.091736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlnvZW9FvQ4TvgLW+3szoNrkref4vAj7ESMwyadQrWCIPcYYiKrwT0fPPhyz4WaF29Jx3x+IVSSQne4heWae0smOt0d1gdCGuN0x/ebxK/gT/30x5VIam9Q2/1EoyoEcIGWHiB+/U4qoR6WuRz108Iqt1JvATuUOKcJlAz35fwKuw0yulkdZhm6E2qI+bnjYY5nVJcO0Pky0wOlPVfQDTJuAwZPhnFU0Jl0TGcywoQPDpmiyejCiZEPKNzGF1kMkVkPUT1+d3slb+7bfLb7hLOnBN4TH2JFjALf9mZhVJ3w3A+xA6yj/WSbVE6vmSeTiF58O9ReFT/eur+FjBNqTVECckL7609undKuI+SH5rOHDz191WnJ/1ZRvcK8zoJrp7OJozPgYxLydHNsPi4fkbbZip72pLHv2Woqwt332zPeJ2nIBPew2m9N7kdbSVC3iLGGyIMOzlLQmBUgs7sFEUY8arvwizfrEbb94Uhro6qwhRQhYujwzON2JbAuPIZ6Hc=) 2026-03-08 00:21:39.091745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEdnNw7u4S++JM+o9xQbDSAxPzLwVJMNTtOF2g4B6pe2vZQPfHOsanXqZTfznteTdndyOoRJKF/31SJodrRvQEI=) 2026-03-08 00:21:39.091754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZ+zvGlnD5QLVoiX41/mT6+NC4KNCdZuzQgIfkGC6x/) 2026-03-08 00:21:39.091762 | orchestrator | 2026-03-08 00:21:39.091771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:39.091779 | orchestrator | Sunday 08 March 2026 00:21:38 +0000 (0:00:01.049) 0:00:21.994 ********** 2026-03-08 00:21:39.091799 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDU0k2BgothrjUmjN3uu5suZfBBViGy+TKb/p7aUUaI3xS6zGHFigKo+an1IqCRwZ8J7Wsv27HoXr/KE4Nj1kivhNnZZCrs92BoQwVmhQn+ot8lzR8ZSF6IBz185zvtqpfVPjHiDtHeVylOdAvOUv2w6o6sw09LGOBZ0U09n6Hk7uzEVsL2cJdJGFrVRhAVtN4z9f8XfgILkEyFj+wTPwMZiadIaOhPEvWWrNgMAj2ZuPbCd+ilAQnIzCUbfA05cFbPg8BfQLQ6Z18u6SKbiLFSWihKhl1S7cTwu3a4NmKiHtNYxDxvFxZ43QKbw6UY7JQLGbBQ3g3oCaICfAN7t/I/xa4VlfkWZaHuEm4CLyrNUTkoWQFDpEe7pXz6QaN2MaMM9DN+bhMsKQjFjQ8ZFjU7JJTHV4ZSpzU5kDU7i3SCq46/fjxERXuI34e6cU3I3fDccBsCkd6e4TexCtzGG2GRCiz3DLUgWTn1fgYuSoQWj4rkNmm1c+psY5fdxRwmnUE=) 2026-03-08 00:21:43.155110 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDwT2CgOGxbpb5cWE0F5RtiYwHUHia5FtiVpj5E5D4TjkKDk7xL9fLUZB6rj8JOQnC3gDVZL7Bm10WAwbGKjW1g=) 2026-03-08 00:21:43.155214 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdpY4ruxWeoOKbek+MFZS9S3btj91G85o6fIhQ326JZ) 2026-03-08 00:21:43.155231 | orchestrator | 2026-03-08 00:21:43.155243 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:43.155283 | orchestrator | Sunday 08 March 2026 00:21:39 +0000 (0:00:01.033) 0:00:23.027 ********** 2026-03-08 00:21:43.155297 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZVO9mqI2PomnxmuRzOsSZrvHiXhpPedKz6iXGM45WB8OIsFfBCrwpX/DqizfcMwr88qiP4fNHR5Oq7tWic8++nOft1nzEnCH9afzuRIlk4aoTj+COo7tSjFFoRwgWwnfgnGGo9GmaifgjvFoB4oXLIBmV6oRuXUFj9UA2aBl8WenqG8tvLXftG7C8NuYeoc2Bp+tuUA1rdFO91bqiRZT1l++OBYBZMYGVBJEr1buW3wNzR8TmiJ54eEmQ2/sGcl8Sv3Ix8TRJfM+DnKli5Tl55gHdaYsMPbhMGyvbw/2OaStA11FjjOnTePfiJvlj5BVp2FyCsr9dM1DtgleSQ3YrZ1nm5gwahdhfhpK8TAMUeAjkHldQQxbI3XQjEe6yihFmuoztvSBZ/V5cDvjJY0m2sS/hwqh/02q7spoRzQKQLfnSx5odLoEYKjyR8MIDQoaNFAeKAtkExK9HjldeEOQ0ryGfYN6Vs8RNuCEyfw4pTnGEcOJ8GZEcsR1e9FdBOqU=) 2026-03-08 00:21:43.155312 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFo6Os4GH5QXCYB2+wJnaorwAwsY8QcCk+AFvojEE6XM1VlOBcbpSTB7KQ/rX0s02jEwp7knzKyYd5NfzCa1iK4=) 2026-03-08 00:21:43.155323 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKVKop9D8ZMbe2x0utE8g0pVte26gtkHp0Quw+UFJN8/) 2026-03-08 00:21:43.155334 | orchestrator | 2026-03-08 00:21:43.155345 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:43.155356 | orchestrator | Sunday 08 March 2026 00:21:40 +0000 (0:00:01.019) 0:00:24.047 ********** 2026-03-08 00:21:43.155367 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC02NoO4H1QqFigFlEV94T5P/EMF82OhdBVE9mNsggMvgKTFinUudwQvzHfSl00C2+HN48ind+IN3R3Uwj3Ac2Ybu6E/Ocm9s0DQdHQIGUIRfG9Owm0spj9TQ+fYFIxv1stZAWZqXLrx9zRbvJkOWcPPxo+GXiFYcN3AROYtnS2ZufBhCdbkGt9aPwtsnYWn2243JrpBfQY/TGCZrW+UsNFMkxS/ro9/k7f2aTRHcw/Nif+9yNc2ErVexUN4QgkcMMpG4HEWBU69HwisTbig3zC9EDAbQstrhbOEmJTAgb033x6HUDcYJq9qLCxc4Lgy7zHn9KNsklpNl9njYyLdXT6qeksnWdQFOEKtfkPhRlIx4PG2nWjzbThQP+/INl4rpb+zAnIYvl6Is2dBhPn9OCGAeccyXFe/CC79U+pctRO21HTqJm9YmQ+k1NIOoiTt4MZyqdSPwikQ0zsiKaVQE+WglGHumCBDTtTGQUmqFioPCvWjuQTwiR1PiNUEasi9rc=) 2026-03-08 00:21:43.155379 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJd2Bngxn0fghdI794EXIVeyd8QjN94uLIfpqOanWBarWxVpet6LRPQMY5tcN6/MQDk1Tgelth59g/lupRxpekg=) 2026-03-08 00:21:43.155390 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGVcITSutWWz69hIszH6XJSNBT/8zglXvNc1kXSAXIgY) 2026-03-08 00:21:43.155401 | orchestrator | 2026-03-08 00:21:43.155412 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:21:43.155423 | orchestrator | Sunday 08 March 2026 00:21:41 +0000 (0:00:01.023) 0:00:25.070 
********** 2026-03-08 00:21:43.155434 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHW7ARLmpe6MHWRAYeI4WOfFbRLiY7WwdlqOQ2GZvfxvoBW+hqIHtW/Gwa48Hchc3HqyUbQBbUT3zqTT34NHt5A=) 2026-03-08 00:21:43.155462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCukxM9udqiMD0CcbDHF1JxICPqgPNsWegbuV+YLOwoVKmGierB441prTTjMWwSjmcWdpg/987UMKm59v5mW/KYjZnI+PqGRo7P63EUdeJqio22u+QPW9d7QLxYQqtlf2aQWPju2ROJgW3to7LELmwV8SWI/kxqyRBaG/1npz7jbQnwrb4E8hqWoev3qNOVecYbopZrL5X3c6Ve9NFWSmbUuutI/eKDvcV+g0dyzo/gpyiww6b6YQf69ttHUBvBrT5cwy16oB+a4oGwuCxpyM1N85Jg8MgY++UJmQzwyBvLy6GWzz8n5D2/z4AZ/DYyjB4aVAimvo3AL9kCXTt7ZUVqc/1sxZF9V+Lbt9U6neAb/Nq1OMx1ax6jHcG7uG89sGHc7LV6UQs8r3JGZX/LvD6ZAwlepO+VP9fDVQE8x0px3Z+vQT4JFMrMfOnucy2ZvGw16MENAyE9OwSj7JMqnVKalRXdNKh3GRa0UTs0LMZlmQZV76aDpg5a+bEkMvUCmM0=) 2026-03-08 00:21:43.155474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP2RwD3nHL/RhXEvFKYXvmb5HwnnrIfAYGDB8EiCXKJk) 2026-03-08 00:21:43.155485 | orchestrator | 2026-03-08 00:21:43.155496 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-08 00:21:43.155507 | orchestrator | Sunday 08 March 2026 00:21:42 +0000 (0:00:01.042) 0:00:26.113 ********** 2026-03-08 00:21:43.155528 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-08 00:21:43.155540 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-08 00:21:43.155634 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-08 00:21:43.155649 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-08 00:21:43.155660 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-08 00:21:43.155670 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-08 00:21:43.155681 | 
orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-08 00:21:43.155692 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:21:43.155703 | orchestrator | 2026-03-08 00:21:43.155714 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-08 00:21:43.155724 | orchestrator | Sunday 08 March 2026 00:21:42 +0000 (0:00:00.175) 0:00:26.289 ********** 2026-03-08 00:21:43.155735 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:21:43.155746 | orchestrator | 2026-03-08 00:21:43.155757 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-08 00:21:43.155774 | orchestrator | Sunday 08 March 2026 00:21:42 +0000 (0:00:00.051) 0:00:26.340 ********** 2026-03-08 00:21:43.155785 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:21:43.155796 | orchestrator | 2026-03-08 00:21:43.155806 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-08 00:21:43.155817 | orchestrator | Sunday 08 March 2026 00:21:42 +0000 (0:00:00.038) 0:00:26.378 ********** 2026-03-08 00:21:43.155828 | orchestrator | changed: [testbed-manager] 2026-03-08 00:21:43.155839 | orchestrator | 2026-03-08 00:21:43.155849 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:21:43.155860 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:21:43.155872 | orchestrator | 2026-03-08 00:21:43.155883 | orchestrator | 2026-03-08 00:21:43.155894 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:21:43.155904 | orchestrator | Sunday 08 March 2026 00:21:43 +0000 (0:00:00.592) 0:00:26.971 ********** 2026-03-08 00:21:43.155915 | orchestrator | =============================================================================== 2026-03-08 00:21:43.155926 
| orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.90s 2026-03-08 00:21:43.155936 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.25s 2026-03-08 00:21:43.155948 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-08 00:21:43.155958 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-08 00:21:43.155969 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-08 00:21:43.155979 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-08 00:21:43.155990 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-08 00:21:43.156000 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-08 00:21:43.156010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:21:43.156021 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:21:43.156032 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:21:43.156042 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:21:43.156053 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:21:43.156063 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:21:43.156074 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-08 00:21:43.156092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-08 00:21:43.156103 | 
orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.59s 2026-03-08 00:21:43.156113 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-03-08 00:21:43.156124 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-08 00:21:43.156136 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-03-08 00:21:43.348307 | orchestrator | + osism apply squid 2026-03-08 00:21:55.198458 | orchestrator | 2026-03-08 00:21:55 | INFO  | Task e3a80bf7-2b55-42ab-a6c6-17faec45153a (squid) was prepared for execution. 2026-03-08 00:21:55.198641 | orchestrator | 2026-03-08 00:21:55 | INFO  | It takes a moment until task e3a80bf7-2b55-42ab-a6c6-17faec45153a (squid) has been started and output is visible here. 2026-03-08 00:24:03.159424 | orchestrator | 2026-03-08 00:24:03.159580 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-08 00:24:03.159597 | orchestrator | 2026-03-08 00:24:03.159608 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-08 00:24:03.159619 | orchestrator | Sunday 08 March 2026 00:21:59 +0000 (0:00:00.169) 0:00:00.169 ********** 2026-03-08 00:24:03.159630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:24:03.159641 | orchestrator | 2026-03-08 00:24:03.159651 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-08 00:24:03.159661 | orchestrator | Sunday 08 March 2026 00:21:59 +0000 (0:00:00.084) 0:00:00.254 ********** 2026-03-08 00:24:03.159671 | orchestrator | ok: [testbed-manager] 2026-03-08 00:24:03.159682 | orchestrator | 2026-03-08 00:24:03.159692 | orchestrator | 
TASK [osism.services.squid : Create required directories] ********************** 2026-03-08 00:24:03.159703 | orchestrator | Sunday 08 March 2026 00:22:00 +0000 (0:00:01.431) 0:00:01.685 ********** 2026-03-08 00:24:03.159713 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-08 00:24:03.159723 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-08 00:24:03.159733 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-08 00:24:03.159743 | orchestrator | 2026-03-08 00:24:03.159753 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-08 00:24:03.159762 | orchestrator | Sunday 08 March 2026 00:22:01 +0000 (0:00:01.125) 0:00:02.811 ********** 2026-03-08 00:24:03.159772 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-08 00:24:03.159782 | orchestrator | 2026-03-08 00:24:03.159792 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-08 00:24:03.159802 | orchestrator | Sunday 08 March 2026 00:22:02 +0000 (0:00:01.080) 0:00:03.891 ********** 2026-03-08 00:24:03.159812 | orchestrator | ok: [testbed-manager] 2026-03-08 00:24:03.159822 | orchestrator | 2026-03-08 00:24:03.159845 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-08 00:24:03.159856 | orchestrator | Sunday 08 March 2026 00:22:03 +0000 (0:00:00.349) 0:00:04.241 ********** 2026-03-08 00:24:03.159866 | orchestrator | changed: [testbed-manager] 2026-03-08 00:24:03.159887 | orchestrator | 2026-03-08 00:24:03.159897 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-08 00:24:03.159907 | orchestrator | Sunday 08 March 2026 00:22:04 +0000 (0:00:00.877) 0:00:05.118 ********** 2026-03-08 00:24:03.159916 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-08 00:24:03.159927 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:03.159941 | orchestrator |
2026-03-08 00:24:03.159951 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-08 00:24:03.159961 | orchestrator | Sunday 08 March 2026 00:22:50 +0000 (0:00:45.984) 0:00:51.103 **********
2026-03-08 00:24:03.159995 | orchestrator | changed: [testbed-manager]
2026-03-08 00:24:03.160006 | orchestrator |
2026-03-08 00:24:03.160016 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-08 00:24:03.160026 | orchestrator | Sunday 08 March 2026 00:23:02 +0000 (0:00:12.004) 0:01:03.108 **********
2026-03-08 00:24:03.160036 | orchestrator | Pausing for 60 seconds
2026-03-08 00:24:03.160047 | orchestrator | changed: [testbed-manager]
2026-03-08 00:24:03.160057 | orchestrator |
2026-03-08 00:24:03.160067 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-08 00:24:03.160077 | orchestrator | Sunday 08 March 2026 00:24:02 +0000 (0:01:00.079) 0:02:03.187 **********
2026-03-08 00:24:03.160087 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:03.160097 | orchestrator |
2026-03-08 00:24:03.160107 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-08 00:24:03.160117 | orchestrator | Sunday 08 March 2026 00:24:02 +0000 (0:00:00.068) 0:02:03.255 **********
2026-03-08 00:24:03.160126 | orchestrator | changed: [testbed-manager]
2026-03-08 00:24:03.160136 | orchestrator |
2026-03-08 00:24:03.160146 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:24:03.160156 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:24:03.160166 | orchestrator |
2026-03-08 00:24:03.160176 | orchestrator |
2026-03-08 00:24:03.160186 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:24:03.160196 | orchestrator | Sunday 08 March 2026 00:24:02 +0000 (0:00:00.589) 0:02:03.845 **********
2026-03-08 00:24:03.160206 | orchestrator | ===============================================================================
2026-03-08 00:24:03.160215 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-03-08 00:24:03.160225 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 45.98s
2026-03-08 00:24:03.160235 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s
2026-03-08 00:24:03.160261 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s
2026-03-08 00:24:03.160272 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s
2026-03-08 00:24:03.160281 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s
2026-03-08 00:24:03.160291 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2026-03-08 00:24:03.160301 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2026-03-08 00:24:03.160311 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2026-03-08 00:24:03.160320 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-03-08 00:24:03.160330 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-03-08 00:24:03.350371 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-08 00:24:03.350463 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-08 00:24:03.394375 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-08 00:24:03.394512 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-08 00:24:03.399665 | orchestrator | + set -e
2026-03-08 00:24:03.399753 | orchestrator | + NAMESPACE=kolla/release
2026-03-08 00:24:03.399769 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-08 00:24:03.403317 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-08 00:24:03.465649 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-08 00:24:03.466064 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-08 00:24:15.350176 | orchestrator | 2026-03-08 00:24:15 | INFO  | Task ec6d9640-3701-4337-98f4-6c982d95f160 (operator) was prepared for execution.
2026-03-08 00:24:15.350266 | orchestrator | 2026-03-08 00:24:15 | INFO  | It takes a moment until task ec6d9640-3701-4337-98f4-6c982d95f160 (operator) has been started and output is visible here.
2026-03-08 00:24:30.273690 | orchestrator |
2026-03-08 00:24:30.273808 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-08 00:24:30.273847 | orchestrator |
2026-03-08 00:24:30.273857 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-08 00:24:30.273866 | orchestrator | Sunday 08 March 2026 00:24:19 +0000 (0:00:00.136) 0:00:00.136 **********
2026-03-08 00:24:30.273876 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:30.273885 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:30.273894 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:30.273903 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:30.273911 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:30.273920 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:30.273928 | orchestrator |
2026-03-08 00:24:30.273937 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-08 00:24:30.273946 | orchestrator | Sunday 08 March 2026 00:24:22 +0000 (0:00:03.236) 0:00:03.373 **********
2026-03-08 00:24:30.273955 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:30.273964 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:30.273972 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:30.273994 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:30.274003 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:30.274011 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:30.274149 | orchestrator |
2026-03-08 00:24:30.274159 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-08 00:24:30.274168 | orchestrator |
2026-03-08 00:24:30.274176 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-08 00:24:30.274185 | orchestrator | Sunday 08 March 2026 00:24:23 +0000 (0:00:00.626) 0:00:03.999 **********
2026-03-08 00:24:30.274194 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:30.274205 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:30.274215 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:30.274225 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:30.274235 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:30.274245 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:30.274255 | orchestrator |
2026-03-08 00:24:30.274266 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-08 00:24:30.274276 | orchestrator | Sunday 08 March 2026 00:24:23 +0000 (0:00:00.124) 0:00:04.123 **********
2026-03-08 00:24:30.274286 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:30.274296 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:30.274306 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:30.274316 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:30.274326 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:30.274336 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:30.274345 | orchestrator |
2026-03-08 00:24:30.274354 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-08 00:24:30.274363 | orchestrator | Sunday 08 March 2026 00:24:23 +0000 (0:00:00.146) 0:00:04.270 **********
2026-03-08 00:24:30.274372 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:30.274381 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:30.274390 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:30.274399 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:30.274407 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:30.274416 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:30.274446 | orchestrator |
2026-03-08 00:24:30.274462 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-08 00:24:30.274477 | orchestrator | Sunday 08 March 2026 00:24:24 +0000 (0:00:00.674) 0:00:04.781 **********
2026-03-08 00:24:30.274492 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:30.274507 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:30.274522 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:30.274534 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:30.274543 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:30.274551 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:30.274560 | orchestrator |
2026-03-08 00:24:30.274568 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-08 00:24:30.274587 | orchestrator | Sunday 08 March 2026 00:24:24 +0000 (0:00:01.127) 0:00:05.455 **********
2026-03-08 00:24:30.274596 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-08 00:24:30.274605 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-08 00:24:30.274613 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-08 00:24:30.274622 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-08 00:24:30.274630 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-08 00:24:30.274638 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-08 00:24:30.274647 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-08 00:24:30.274655 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-08 00:24:30.274664 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-08 00:24:30.274672 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-08 00:24:30.274681 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-08 00:24:30.274689 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-08 00:24:30.274698 | orchestrator |
2026-03-08 00:24:30.274706 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-08 00:24:30.274715 | orchestrator | Sunday 08 March 2026 00:24:25 +0000 (0:00:01.089) 0:00:06.582 **********
2026-03-08 00:24:30.274723 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:30.274732 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:30.274740 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:30.274749 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:30.274757 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:30.274765 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:30.274774 | orchestrator |
2026-03-08 00:24:30.274783 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-08 00:24:30.274792 | orchestrator | Sunday 08 March 2026 00:24:26 +0000 (0:00:01.218) 0:00:07.672 **********
2026-03-08 00:24:30.274801 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-08 00:24:30.274810 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-08 00:24:30.274818 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-08 00:24:30.274827 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-08 00:24:30.274852 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-08 00:24:30.274862 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-08 00:24:30.274870 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-08 00:24:30.274878 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-08 00:24:30.274887 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-08 00:24:30.274895 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-08 00:24:30.274904 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-08 00:24:30.274912 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-08 00:24:30.274920 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-08 00:24:30.274929 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-08 00:24:30.274937 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-08 00:24:30.274946 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-08 00:24:30.274955 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-08 00:24:30.274963 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-08 00:24:30.274972 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-08 00:24:30.274980 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-08 00:24:30.274988 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-08 00:24:30.275003 | orchestrator |
2026-03-08 00:24:30.275011 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-08 00:24:30.275021 | orchestrator | Sunday 08 March 2026 00:24:28 +0000 (0:00:01.218) 0:00:08.890 **********
2026-03-08 00:24:30.275029 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:30.275038 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:30.275046 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:30.275055 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:30.275063 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:30.275071 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:30.275080 | orchestrator |
2026-03-08 00:24:30.275088 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-08 00:24:30.275097 | orchestrator | Sunday 08 March 2026 00:24:28 +0000 (0:00:00.171) 0:00:09.062 **********
2026-03-08 00:24:30.275106 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:30.275114 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:30.275122 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:30.275131 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:30.275139 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:30.275148 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:30.275156 | orchestrator |
2026-03-08 00:24:30.275165 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-08 00:24:30.275173 | orchestrator | Sunday 08 March 2026 00:24:28 +0000 (0:00:00.185) 0:00:09.247 **********
2026-03-08 00:24:30.275182 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:30.275190 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:30.275199 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:30.275207 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:30.275215 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:30.275224 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:30.275232 | orchestrator |
2026-03-08 00:24:30.275241 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-08 00:24:30.275249 | orchestrator | Sunday 08 March 2026 00:24:29 +0000 (0:00:00.594) 0:00:09.842 **********
2026-03-08 00:24:30.275258 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:30.275266 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:30.275275 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:30.275283 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:30.275291 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:30.275300 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:30.275308 | orchestrator |
2026-03-08 00:24:30.275317 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-08 00:24:30.275325 | orchestrator | Sunday 08 March 2026 00:24:29 +0000 (0:00:00.177) 0:00:10.020 **********
2026-03-08 00:24:30.275343 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-08 00:24:30.275352 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:30.275361 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 00:24:30.275370 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 00:24:30.275378 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:30.275387 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:30.275395 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 00:24:30.275404 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:24:30.275412 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:30.275421 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:30.275450 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-08 00:24:30.275460 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:30.275468 | orchestrator |
2026-03-08 00:24:30.275477 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-08 00:24:30.275485 | orchestrator | Sunday 08 March 2026 00:24:29 +0000 (0:00:00.694) 0:00:10.714 **********
2026-03-08 00:24:30.275494 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:30.275509 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:30.275518 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:30.275527 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:30.275535 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:30.275543 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:30.275552 | orchestrator |
2026-03-08 00:24:30.275561 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-08 00:24:30.275569 | orchestrator | Sunday 08 March 2026 00:24:30 +0000 (0:00:00.162) 0:00:10.877 **********
2026-03-08 00:24:30.275578 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:30.275586 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:30.275595 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:30.275603 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:30.275618 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:31.496043 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:31.497509 | orchestrator |
2026-03-08 00:24:31.497588 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-08 00:24:31.497607 | orchestrator | Sunday 08 March 2026 00:24:30 +0000 (0:00:00.169) 0:00:11.047 **********
2026-03-08 00:24:31.497620 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:31.497631 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:31.497642 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:31.497653 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:31.497664 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:31.497675 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:31.497685 | orchestrator |
2026-03-08 00:24:31.497696 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-08 00:24:31.497707 | orchestrator | Sunday 08 March 2026 00:24:30 +0000 (0:00:00.147) 0:00:11.194 **********
2026-03-08 00:24:31.497718 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:31.497729 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:31.497758 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:31.497770 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:31.497780 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:31.497791 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:31.497802 | orchestrator |
2026-03-08 00:24:31.497812 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-08 00:24:31.497829 | orchestrator | Sunday 08 March 2026 00:24:31 +0000 (0:00:00.620) 0:00:11.815 **********
2026-03-08 00:24:31.497848 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:31.497866 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:31.497883 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:31.497902 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:31.497920 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:31.497937 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:31.497956 | orchestrator |
2026-03-08 00:24:31.497975 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:24:31.497995 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 00:24:31.498090 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 00:24:31.498119 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 00:24:31.498138 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 00:24:31.498158 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 00:24:31.498172 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 00:24:31.498214 | orchestrator |
2026-03-08 00:24:31.498233 | orchestrator |
2026-03-08 00:24:31.498246 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:24:31.498257 | orchestrator | Sunday 08 March 2026 00:24:31 +0000 (0:00:00.235) 0:00:12.051 **********
2026-03-08 00:24:31.498267 | orchestrator | ===============================================================================
2026-03-08 00:24:31.498278 | orchestrator | Gathering Facts --------------------------------------------------------- 3.24s
2026-03-08 00:24:31.498289 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s
2026-03-08 00:24:31.498300 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.13s
2026-03-08 00:24:31.498311 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.09s
2026-03-08 00:24:31.498322 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2026-03-08 00:24:31.498332 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.67s
2026-03-08 00:24:31.498343 | orchestrator | Do not require tty for all users ---------------------------------------- 0.63s
2026-03-08 00:24:31.498354 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2026-03-08 00:24:31.498365 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2026-03-08 00:24:31.498375 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.51s
2026-03-08 00:24:31.498386 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2026-03-08 00:24:31.498397 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-03-08 00:24:31.498407 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-03-08 00:24:31.498418 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-08 00:24:31.498460 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-03-08 00:24:31.498477 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-03-08 00:24:31.498488 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2026-03-08 00:24:31.498499 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-03-08 00:24:31.498510 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.12s
2026-03-08 00:24:31.764558 | orchestrator | + osism apply --environment custom facts
2026-03-08 00:24:33.619654 | orchestrator | 2026-03-08 00:24:33 | INFO  | Trying to run play facts in environment custom
2026-03-08 00:24:43.771592 | orchestrator | 2026-03-08 00:24:43 | INFO  | Task 50f71e05-42f2-48eb-88f5-d1be258ac44f (facts) was prepared for execution.
2026-03-08 00:24:43.772273 | orchestrator | 2026-03-08 00:24:43 | INFO  | It takes a moment until task 50f71e05-42f2-48eb-88f5-d1be258ac44f (facts) has been started and output is visible here.
2026-03-08 00:25:25.663814 | orchestrator |
2026-03-08 00:25:25.663942 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-08 00:25:25.663967 | orchestrator |
2026-03-08 00:25:25.663985 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-08 00:25:25.664001 | orchestrator | Sunday 08 March 2026 00:24:47 +0000 (0:00:00.063) 0:00:00.063 **********
2026-03-08 00:25:25.664017 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:25.664030 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:25.664046 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:25.664061 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:25.664076 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:25.664091 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:25.664106 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:25.664122 | orchestrator |
2026-03-08 00:25:25.664139 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-08 00:25:25.664184 | orchestrator | Sunday 08 March 2026 00:24:49 +0000 (0:00:01.214) 0:00:01.277 **********
2026-03-08 00:25:25.664199 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:25.664219 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:25.664239 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:25.664253 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:25.664266 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:25.664279 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:25.664293 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:25.664305 | orchestrator |
2026-03-08 00:25:25.664318 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-08 00:25:25.664332 | orchestrator |
2026-03-08 00:25:25.664350 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-08 00:25:25.664367 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.998) 0:00:02.275 **********
2026-03-08 00:25:25.664383 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.664464 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.664484 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.664503 | orchestrator |
2026-03-08 00:25:25.664521 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-08 00:25:25.664540 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.093) 0:00:02.369 **********
2026-03-08 00:25:25.664559 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.664577 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.664596 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.664615 | orchestrator |
2026-03-08 00:25:25.664634 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-08 00:25:25.664653 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.180) 0:00:02.550 **********
2026-03-08 00:25:25.664671 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.664689 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.664707 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.664725 | orchestrator |
2026-03-08 00:25:25.664744 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-08 00:25:25.664763 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.189) 0:00:02.739 **********
2026-03-08 00:25:25.664784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:25:25.664797 | orchestrator |
2026-03-08 00:25:25.664810 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-08 00:25:25.664828 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.126) 0:00:02.866 **********
2026-03-08 00:25:25.664847 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.664864 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.664881 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.664900 | orchestrator |
2026-03-08 00:25:25.664918 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-08 00:25:25.664937 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.370) 0:00:03.237 **********
2026-03-08 00:25:25.664956 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:25:25.664975 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:25:25.664995 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:25:25.665013 | orchestrator |
2026-03-08 00:25:25.665030 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-08 00:25:25.665049 | orchestrator | Sunday 08 March 2026 00:24:51 +0000 (0:00:00.140) 0:00:03.378 **********
2026-03-08 00:25:25.665067 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:25.665086 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:25.665104 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:25.665124 | orchestrator |
2026-03-08 00:25:25.665142 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-08 00:25:25.665160 | orchestrator | Sunday 08 March 2026 00:24:52 +0000 (0:00:00.914) 0:00:04.292 **********
2026-03-08 00:25:25.665201 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.665222 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.665241 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.665259 | orchestrator |
2026-03-08 00:25:25.665277 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-08 00:25:25.665296 | orchestrator | Sunday 08 March 2026 00:24:52 +0000 (0:00:00.662) 0:00:04.955 **********
2026-03-08 00:25:25.665314 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:25.665328 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:25.665339 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:25.665349 | orchestrator |
2026-03-08 00:25:25.665360 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-08 00:25:25.665448 | orchestrator | Sunday 08 March 2026 00:24:53 +0000 (0:00:01.092) 0:00:06.048 **********
2026-03-08 00:25:25.665463 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:25.665476 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:25.665494 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:25.665512 | orchestrator |
2026-03-08 00:25:25.665531 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-08 00:25:25.665548 | orchestrator | Sunday 08 March 2026 00:25:09 +0000 (0:00:15.592) 0:00:21.640 **********
2026-03-08 00:25:25.665560 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:25:25.665571 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:25:25.665582 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:25:25.665593 | orchestrator |
2026-03-08 00:25:25.665603 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-08 00:25:25.665639 | orchestrator | Sunday 08 March 2026 00:25:09 +0000 (0:00:00.087) 0:00:21.728 **********
2026-03-08 00:25:25.665651 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:25.665662 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:25.665673 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:25.665683 | orchestrator |
2026-03-08 00:25:25.665700 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-08 00:25:25.665711 | orchestrator | Sunday 08 March 2026 00:25:17 +0000 (0:00:07.847) 0:00:29.575 **********
2026-03-08 00:25:25.665722 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.665733 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.665744 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.665754 | orchestrator |
2026-03-08 00:25:25.665765 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-08 00:25:25.665776 | orchestrator | Sunday 08 March 2026 00:25:17 +0000 (0:00:00.446) 0:00:30.022 **********
2026-03-08 00:25:25.665787 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-08 00:25:25.665798 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-08 00:25:25.665809 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-08 00:25:25.665820 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-08 00:25:25.665830 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-08 00:25:25.665841 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-08 00:25:25.665852 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-08 00:25:25.665862 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-08 00:25:25.665873 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-08 00:25:25.665884 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-08 00:25:25.665894 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-08 00:25:25.665905 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-08 00:25:25.665915 | orchestrator |
2026-03-08 00:25:25.665926 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-08 00:25:25.665937 | orchestrator | Sunday 08 March 2026 00:25:21 +0000 (0:00:03.250) 0:00:33.272 **********
2026-03-08 00:25:25.665958 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.665969 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.665980 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.665991 | orchestrator |
2026-03-08 00:25:25.666001 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-08 00:25:25.666136 | orchestrator |
2026-03-08 00:25:25.666154 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-08 00:25:25.666165 | orchestrator | Sunday 08 March 2026 00:25:22 +0000 (0:00:01.108) 0:00:34.381 **********
2026-03-08 00:25:25.666176 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:25.666187 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:25.666198 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:25.666209 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:25.666220 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:25.666231 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:25.666242 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:25.666253 | orchestrator |
2026-03-08 00:25:25.666264 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:25:25.666276 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:25:25.666288 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:25:25.666301 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:25:25.666312 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:25:25.666323 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:25:25.666334 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:25:25.666345 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:25:25.666356 | orchestrator |
2026-03-08 00:25:25.666367 | orchestrator |
2026-03-08 00:25:25.666384 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:25:25.666426 | orchestrator | Sunday 08 March 2026 00:25:25 +0000 (0:00:03.518) 0:00:37.900 **********
2026-03-08 00:25:25.666446 | orchestrator | ===============================================================================
2026-03-08 00:25:25.666465 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.59s
2026-03-08 00:25:25.666483 | orchestrator | Install required packages (Debian) -------------------------------------- 7.85s
2026-03-08 00:25:25.666495 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.52s
2026-03-08 00:25:25.666506 | orchestrator | Copy fact files --------------------------------------------------------- 3.25s
2026-03-08 00:25:25.666517 | orchestrator | Create custom facts directory ------------------------------------------- 1.21s
2026-03-08 00:25:25.666527 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.11s
2026-03-08 00:25:25.666550 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-03-08 00:25:25.869586 | orchestrator | Copy fact file ---------------------------------------------------------- 1.00s
2026-03-08 00:25:25.870642 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.91s
2026-03-08 00:25:25.870758 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.66s 2026-03-08 00:25:25.870778 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2026-03-08 00:25:25.870792 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.37s 2026-03-08 00:25:25.870853 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2026-03-08 00:25:25.870865 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2026-03-08 00:25:25.870875 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-03-08 00:25:25.870889 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2026-03-08 00:25:25.870903 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-03-08 00:25:25.870916 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2026-03-08 00:25:26.142212 | orchestrator | + osism apply bootstrap 2026-03-08 00:25:38.264031 | orchestrator | 2026-03-08 00:25:38 | INFO  | Task 5e79566e-6c95-48d8-9f01-ef5fe0d7f3a4 (bootstrap) was prepared for execution. 2026-03-08 00:25:38.264153 | orchestrator | 2026-03-08 00:25:38 | INFO  | It takes a moment until task 5e79566e-6c95-48d8-9f01-ef5fe0d7f3a4 (bootstrap) has been started and output is visible here. 
2026-03-08 00:25:54.533645 | orchestrator | 2026-03-08 00:25:54.533776 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-08 00:25:54.533793 | orchestrator | 2026-03-08 00:25:54.533805 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-08 00:25:54.533818 | orchestrator | Sunday 08 March 2026 00:25:42 +0000 (0:00:00.151) 0:00:00.151 ********** 2026-03-08 00:25:54.533838 | orchestrator | ok: [testbed-manager] 2026-03-08 00:25:54.533853 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:25:54.533864 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:25:54.533874 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:25:54.533885 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:25:54.533896 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:25:54.533906 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:25:54.533917 | orchestrator | 2026-03-08 00:25:54.533929 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:25:54.533940 | orchestrator | 2026-03-08 00:25:54.533951 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:25:54.533962 | orchestrator | Sunday 08 March 2026 00:25:43 +0000 (0:00:00.244) 0:00:00.395 ********** 2026-03-08 00:25:54.533972 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:25:54.533983 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:25:54.533994 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:25:54.534004 | orchestrator | ok: [testbed-manager] 2026-03-08 00:25:54.534082 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:25:54.534098 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:25:54.534108 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:25:54.534119 | orchestrator | 2026-03-08 00:25:54.534131 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-03-08 00:25:54.534144 | orchestrator | 2026-03-08 00:25:54.534159 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:25:54.534178 | orchestrator | Sunday 08 March 2026 00:25:46 +0000 (0:00:03.751) 0:00:04.147 ********** 2026-03-08 00:25:54.534196 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-08 00:25:54.534216 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-08 00:25:54.534235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-08 00:25:54.534253 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-08 00:25:54.534271 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-08 00:25:54.534288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:25:54.534306 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-08 00:25:54.534338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:25:54.534357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-08 00:25:54.534375 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-08 00:25:54.534457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:25:54.534478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-08 00:25:54.534496 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-08 00:25:54.534514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-08 00:25:54.534526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-08 00:25:54.534536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-08 00:25:54.534553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:25:54.534568 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-manager)  2026-03-08 00:25:54.534579 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:25:54.534590 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-08 00:25:54.534601 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-08 00:25:54.534617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-08 00:25:54.534631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-08 00:25:54.534646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-08 00:25:54.534663 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:25:54.534681 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-08 00:25:54.534700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-08 00:25:54.534712 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-08 00:25:54.534722 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-08 00:25:54.534733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-08 00:25:54.534744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-08 00:25:54.534755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-08 00:25:54.534765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-08 00:25:54.534776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-08 00:25:54.534786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-08 00:25:54.534796 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:25:54.534807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-08 00:25:54.534818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-08 00:25:54.534828 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-manager)  2026-03-08 00:25:54.534839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-08 00:25:54.534849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-08 00:25:54.534860 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-08 00:25:54.534870 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:25:54.534881 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-08 00:25:54.534891 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-08 00:25:54.534902 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-08 00:25:54.534912 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:25:54.534945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-08 00:25:54.534957 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-08 00:25:54.534967 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-08 00:25:54.534978 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-08 00:25:54.534989 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:25:54.534999 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-08 00:25:54.535010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-08 00:25:54.535020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-08 00:25:54.535048 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:25:54.535069 | orchestrator | 2026-03-08 00:25:54.535080 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-08 00:25:54.535091 | orchestrator | 2026-03-08 00:25:54.535102 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-08 00:25:54.535113 | orchestrator | Sunday 08 March 2026 00:25:47 +0000 
(0:00:00.457) 0:00:04.604 ********** 2026-03-08 00:25:54.535124 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:25:54.535135 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:25:54.535146 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:25:54.535156 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:25:54.535167 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:25:54.535178 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:25:54.535189 | orchestrator | ok: [testbed-manager] 2026-03-08 00:25:54.535200 | orchestrator | 2026-03-08 00:25:54.535210 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-08 00:25:54.535221 | orchestrator | Sunday 08 March 2026 00:25:48 +0000 (0:00:01.238) 0:00:05.842 ********** 2026-03-08 00:25:54.535232 | orchestrator | ok: [testbed-manager] 2026-03-08 00:25:54.535243 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:25:54.535254 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:25:54.535264 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:25:54.535275 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:25:54.535285 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:25:54.535296 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:25:54.535307 | orchestrator | 2026-03-08 00:25:54.535317 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-08 00:25:54.535328 | orchestrator | Sunday 08 March 2026 00:25:49 +0000 (0:00:01.295) 0:00:07.138 ********** 2026-03-08 00:25:54.535340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:25:54.535353 | orchestrator | 2026-03-08 00:25:54.535365 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-08 00:25:54.535376 | 
orchestrator | Sunday 08 March 2026 00:25:50 +0000 (0:00:00.307) 0:00:07.446 ********** 2026-03-08 00:25:54.535408 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:25:54.535419 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:25:54.535430 | orchestrator | changed: [testbed-manager] 2026-03-08 00:25:54.535440 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:25:54.535451 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:25:54.535462 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:25:54.535472 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:25:54.535483 | orchestrator | 2026-03-08 00:25:54.535494 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-08 00:25:54.535504 | orchestrator | Sunday 08 March 2026 00:25:52 +0000 (0:00:02.001) 0:00:09.447 ********** 2026-03-08 00:25:54.535515 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:25:54.535527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:25:54.535540 | orchestrator | 2026-03-08 00:25:54.535551 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-08 00:25:54.535561 | orchestrator | Sunday 08 March 2026 00:25:52 +0000 (0:00:00.264) 0:00:09.711 ********** 2026-03-08 00:25:54.535572 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:25:54.535583 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:25:54.535594 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:25:54.535604 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:25:54.535615 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:25:54.535626 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:25:54.535636 | orchestrator | 2026-03-08 00:25:54.535652 | orchestrator | TASK 
[osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-08 00:25:54.535670 | orchestrator | Sunday 08 March 2026 00:25:53 +0000 (0:00:01.046) 0:00:10.758 ********** 2026-03-08 00:25:54.535681 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:25:54.535692 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:25:54.535703 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:25:54.535714 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:25:54.535724 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:25:54.535735 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:25:54.535745 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:25:54.535756 | orchestrator | 2026-03-08 00:25:54.535768 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-08 00:25:54.535787 | orchestrator | Sunday 08 March 2026 00:25:53 +0000 (0:00:00.578) 0:00:11.337 ********** 2026-03-08 00:25:54.535799 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:25:54.535809 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:25:54.535820 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:25:54.535831 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:25:54.535843 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:25:54.535860 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:25:54.535871 | orchestrator | ok: [testbed-manager] 2026-03-08 00:25:54.535882 | orchestrator | 2026-03-08 00:25:54.535892 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-08 00:25:54.535904 | orchestrator | Sunday 08 March 2026 00:25:54 +0000 (0:00:00.441) 0:00:11.779 ********** 2026-03-08 00:25:54.535914 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:25:54.535925 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:25:54.535942 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:26:07.053820 | 
orchestrator | skipping: [testbed-node-5] 2026-03-08 00:26:07.053930 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:26:07.053941 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:26:07.053949 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:26:07.053955 | orchestrator | 2026-03-08 00:26:07.053964 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-08 00:26:07.053972 | orchestrator | Sunday 08 March 2026 00:25:54 +0000 (0:00:00.226) 0:00:12.005 ********** 2026-03-08 00:26:07.053980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:26:07.054001 | orchestrator | 2026-03-08 00:26:07.054007 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-08 00:26:07.054093 | orchestrator | Sunday 08 March 2026 00:25:54 +0000 (0:00:00.331) 0:00:12.337 ********** 2026-03-08 00:26:07.054102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:26:07.054109 | orchestrator | 2026-03-08 00:26:07.054115 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-08 00:26:07.054122 | orchestrator | Sunday 08 March 2026 00:25:55 +0000 (0:00:00.330) 0:00:12.667 ********** 2026-03-08 00:26:07.054129 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.054136 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.054142 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.054148 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.054154 | orchestrator | ok: 
[testbed-node-4] 2026-03-08 00:26:07.054161 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.054167 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054173 | orchestrator | 2026-03-08 00:26:07.054180 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-08 00:26:07.054186 | orchestrator | Sunday 08 March 2026 00:25:56 +0000 (0:00:01.394) 0:00:14.062 ********** 2026-03-08 00:26:07.054192 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:26:07.054218 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:26:07.054225 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:26:07.054231 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:26:07.054237 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:26:07.054244 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:26:07.054250 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:26:07.054256 | orchestrator | 2026-03-08 00:26:07.054262 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-08 00:26:07.054268 | orchestrator | Sunday 08 March 2026 00:25:56 +0000 (0:00:00.238) 0:00:14.300 ********** 2026-03-08 00:26:07.054277 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054287 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.054297 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.054306 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.054316 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.054326 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.054336 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.054345 | orchestrator | 2026-03-08 00:26:07.054356 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-08 00:26:07.054367 | orchestrator | Sunday 08 March 2026 00:25:57 +0000 (0:00:00.534) 0:00:14.835 ********** 2026-03-08 00:26:07.054393 | 
orchestrator | skipping: [testbed-manager] 2026-03-08 00:26:07.054404 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:26:07.054414 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:26:07.054425 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:26:07.054436 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:26:07.054447 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:26:07.054459 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:26:07.054470 | orchestrator | 2026-03-08 00:26:07.054481 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-08 00:26:07.054494 | orchestrator | Sunday 08 March 2026 00:25:57 +0000 (0:00:00.336) 0:00:15.171 ********** 2026-03-08 00:26:07.054506 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054517 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:26:07.054527 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:26:07.054535 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:26:07.054543 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:26:07.054558 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:26:07.054566 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:26:07.054573 | orchestrator | 2026-03-08 00:26:07.054581 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-08 00:26:07.054588 | orchestrator | Sunday 08 March 2026 00:25:58 +0000 (0:00:00.565) 0:00:15.736 ********** 2026-03-08 00:26:07.054595 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054603 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:26:07.054610 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:26:07.054618 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:26:07.054625 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:26:07.054632 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:26:07.054639 | 
orchestrator | changed: [testbed-node-2] 2026-03-08 00:26:07.054647 | orchestrator | 2026-03-08 00:26:07.054654 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-08 00:26:07.054662 | orchestrator | Sunday 08 March 2026 00:25:59 +0000 (0:00:01.207) 0:00:16.944 ********** 2026-03-08 00:26:07.054670 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.054678 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.054685 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054692 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.054700 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.054706 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.054713 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.054719 | orchestrator | 2026-03-08 00:26:07.054725 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-08 00:26:07.054731 | orchestrator | Sunday 08 March 2026 00:26:00 +0000 (0:00:01.059) 0:00:18.003 ********** 2026-03-08 00:26:07.054762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:26:07.054769 | orchestrator | 2026-03-08 00:26:07.054775 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-08 00:26:07.054782 | orchestrator | Sunday 08 March 2026 00:26:00 +0000 (0:00:00.346) 0:00:18.350 ********** 2026-03-08 00:26:07.054788 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:26:07.054794 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:26:07.054800 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:26:07.054806 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:26:07.054812 | orchestrator | changed: [testbed-node-2] 
2026-03-08 00:26:07.054818 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:26:07.054825 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:26:07.054832 | orchestrator | 2026-03-08 00:26:07.054842 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-08 00:26:07.054858 | orchestrator | Sunday 08 March 2026 00:26:02 +0000 (0:00:01.321) 0:00:19.671 ********** 2026-03-08 00:26:07.054870 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054880 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.054889 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.054899 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.054909 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.054918 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.054928 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.054938 | orchestrator | 2026-03-08 00:26:07.054948 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-08 00:26:07.054959 | orchestrator | Sunday 08 March 2026 00:26:02 +0000 (0:00:00.207) 0:00:19.879 ********** 2026-03-08 00:26:07.054970 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.054980 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.054991 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.054999 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.055005 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.055011 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.055017 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.055024 | orchestrator | 2026-03-08 00:26:07.055030 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-08 00:26:07.055036 | orchestrator | Sunday 08 March 2026 00:26:02 +0000 (0:00:00.230) 0:00:20.109 ********** 2026-03-08 00:26:07.055042 | orchestrator | ok: [testbed-manager] 2026-03-08 
00:26:07.055048 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.055054 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.055060 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.055066 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.055072 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.055078 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.055084 | orchestrator | 2026-03-08 00:26:07.055091 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-08 00:26:07.055097 | orchestrator | Sunday 08 March 2026 00:26:02 +0000 (0:00:00.226) 0:00:20.336 ********** 2026-03-08 00:26:07.055104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:26:07.055112 | orchestrator | 2026-03-08 00:26:07.055118 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-08 00:26:07.055124 | orchestrator | Sunday 08 March 2026 00:26:03 +0000 (0:00:00.318) 0:00:20.655 ********** 2026-03-08 00:26:07.055130 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.055136 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.055142 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.055148 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.055161 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.055167 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.055173 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.055179 | orchestrator | 2026-03-08 00:26:07.055186 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-08 00:26:07.055192 | orchestrator | Sunday 08 March 2026 00:26:03 +0000 (0:00:00.567) 0:00:21.222 ********** 2026-03-08 00:26:07.055198 | 
orchestrator | skipping: [testbed-manager] 2026-03-08 00:26:07.055204 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:26:07.055210 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:26:07.055217 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:26:07.055223 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:26:07.055229 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:26:07.055235 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:26:07.055241 | orchestrator | 2026-03-08 00:26:07.055248 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-08 00:26:07.055254 | orchestrator | Sunday 08 March 2026 00:26:04 +0000 (0:00:00.263) 0:00:21.485 ********** 2026-03-08 00:26:07.055260 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.055266 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.055272 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.055279 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.055285 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:26:07.055291 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:26:07.055297 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:26:07.055303 | orchestrator | 2026-03-08 00:26:07.055310 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-08 00:26:07.055316 | orchestrator | Sunday 08 March 2026 00:26:05 +0000 (0:00:01.070) 0:00:22.556 ********** 2026-03-08 00:26:07.055322 | orchestrator | ok: [testbed-manager] 2026-03-08 00:26:07.055328 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:26:07.055334 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:26:07.055340 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:26:07.055346 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:26:07.055352 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:26:07.055358 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:26:07.055364 | 
orchestrator |
2026-03-08 00:26:07.055370 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-08 00:26:07.055419 | orchestrator | Sunday 08 March 2026 00:26:05 +0000 (0:00:00.700) 0:00:23.256 **********
2026-03-08 00:26:07.055426 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:07.055432 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:07.055438 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:07.055452 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:07.055466 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:26:48.383392 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:26:48.383498 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:26:48.383508 | orchestrator |
2026-03-08 00:26:48.383515 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-08 00:26:48.383535 | orchestrator | Sunday 08 March 2026 00:26:07 +0000 (0:00:01.171) 0:00:24.427 **********
2026-03-08 00:26:48.383541 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.383548 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.383554 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.383560 | orchestrator | changed: [testbed-manager]
2026-03-08 00:26:48.383566 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:26:48.383571 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:26:48.383577 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:26:48.383583 | orchestrator |
2026-03-08 00:26:48.383589 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-08 00:26:48.383595 | orchestrator | Sunday 08 March 2026 00:26:23 +0000 (0:00:16.185) 0:00:40.613 **********
2026-03-08 00:26:48.383600 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.383606 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.383612 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.383638 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.383643 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.383649 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.383655 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.383660 | orchestrator |
2026-03-08 00:26:48.383665 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-08 00:26:48.383672 | orchestrator | Sunday 08 March 2026 00:26:23 +0000 (0:00:00.221) 0:00:40.834 **********
2026-03-08 00:26:48.383677 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.383683 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.383688 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.383694 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.383699 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.383705 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.383710 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.383715 | orchestrator |
2026-03-08 00:26:48.383721 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-08 00:26:48.383726 | orchestrator | Sunday 08 March 2026 00:26:23 +0000 (0:00:00.231) 0:00:41.066 **********
2026-03-08 00:26:48.383732 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.383737 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.383742 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.383748 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.383753 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.383759 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.383764 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.383770 | orchestrator |
2026-03-08 00:26:48.383776 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-08 00:26:48.383782 | orchestrator | Sunday 08 March 2026 00:26:23 +0000 (0:00:00.294) 0:00:41.289 **********
2026-03-08 00:26:48.383790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:26:48.383797 | orchestrator |
2026-03-08 00:26:48.383803 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-08 00:26:48.383809 | orchestrator | Sunday 08 March 2026 00:26:24 +0000 (0:00:00.294) 0:00:41.584 **********
2026-03-08 00:26:48.383814 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.383820 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.383825 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.383830 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.383836 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.383841 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.383847 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.383852 | orchestrator |
2026-03-08 00:26:48.383857 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-08 00:26:48.383863 | orchestrator | Sunday 08 March 2026 00:26:25 +0000 (0:00:01.567) 0:00:43.151 **********
2026-03-08 00:26:48.383868 | orchestrator | changed: [testbed-manager]
2026-03-08 00:26:48.383874 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:26:48.383880 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:26:48.383885 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:26:48.383890 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:26:48.383896 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:26:48.383901 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:26:48.383907 | orchestrator |
2026-03-08 00:26:48.383912 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-08 00:26:48.383931 | orchestrator | Sunday 08 March 2026 00:26:26 +0000 (0:00:01.044) 0:00:44.195 **********
2026-03-08 00:26:48.383937 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.383943 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.383948 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.383954 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.383959 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.383970 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.383975 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.383980 | orchestrator |
2026-03-08 00:26:48.383986 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-08 00:26:48.383992 | orchestrator | Sunday 08 March 2026 00:26:27 +0000 (0:00:00.772) 0:00:44.968 **********
2026-03-08 00:26:48.383998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:26:48.384005 | orchestrator |
2026-03-08 00:26:48.384010 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-08 00:26:48.384017 | orchestrator | Sunday 08 March 2026 00:26:27 +0000 (0:00:00.269) 0:00:45.238 **********
2026-03-08 00:26:48.384022 | orchestrator | changed: [testbed-manager]
2026-03-08 00:26:48.384027 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:26:48.384033 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:26:48.384038 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:26:48.384043 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:26:48.384048 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:26:48.384054 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:26:48.384059 | orchestrator |
2026-03-08 00:26:48.384084 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-08 00:26:48.384090 | orchestrator | Sunday 08 March 2026 00:26:28 +0000 (0:00:01.019) 0:00:46.257 **********
2026-03-08 00:26:48.384095 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:26:48.384100 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:26:48.384106 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:26:48.384111 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:26:48.384116 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:26:48.384122 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:26:48.384127 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:26:48.384133 | orchestrator |
2026-03-08 00:26:48.384138 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-08 00:26:48.384143 | orchestrator | Sunday 08 March 2026 00:26:29 +0000 (0:00:00.217) 0:00:46.475 **********
2026-03-08 00:26:48.384149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:26:48.384154 | orchestrator |
2026-03-08 00:26:48.384160 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-08 00:26:48.384165 | orchestrator | Sunday 08 March 2026 00:26:29 +0000 (0:00:00.299) 0:00:46.775 **********
2026-03-08 00:26:48.384170 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.384176 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.384181 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.384186 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.384191 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.384196 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.384202 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.384208 | orchestrator |
2026-03-08 00:26:48.384214 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-08 00:26:48.384220 | orchestrator | Sunday 08 March 2026 00:26:31 +0000 (0:00:01.619) 0:00:48.394 **********
2026-03-08 00:26:48.384225 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:26:48.384230 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:26:48.384235 | orchestrator | changed: [testbed-manager]
2026-03-08 00:26:48.384241 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:26:48.384378 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:26:48.384389 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:26:48.384395 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:26:48.384401 | orchestrator |
2026-03-08 00:26:48.384407 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-08 00:26:48.384424 | orchestrator | Sunday 08 March 2026 00:26:32 +0000 (0:00:01.208) 0:00:49.602 **********
2026-03-08 00:26:48.384430 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:26:48.384435 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:26:48.384441 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:26:48.384447 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:26:48.384453 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:26:48.384459 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:26:48.384465 | orchestrator | changed: [testbed-manager]
2026-03-08 00:26:48.384470 | orchestrator |
2026-03-08 00:26:48.384476 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-08 00:26:48.384482 | orchestrator | Sunday 08 March 2026 00:26:45 +0000 (0:00:13.165) 0:01:02.767 **********
2026-03-08 00:26:48.384488 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.384493 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.384499 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.384504 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.384510 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.384584 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.384591 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.384596 | orchestrator |
2026-03-08 00:26:48.384602 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-08 00:26:48.384608 | orchestrator | Sunday 08 March 2026 00:26:46 +0000 (0:00:01.291) 0:01:04.059 **********
2026-03-08 00:26:48.384614 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.384619 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.384624 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.384630 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.384635 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.384640 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.384645 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.384651 | orchestrator |
2026-03-08 00:26:48.384656 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-08 00:26:48.384662 | orchestrator | Sunday 08 March 2026 00:26:47 +0000 (0:00:00.919) 0:01:04.979 **********
2026-03-08 00:26:48.384674 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.384680 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.384685 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.384690 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.384696 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.384702 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.384708 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.384713 | orchestrator |
2026-03-08 00:26:48.384719 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-08 00:26:48.384725 | orchestrator | Sunday 08 March 2026 00:26:47 +0000 (0:00:00.251) 0:01:05.231 **********
2026-03-08 00:26:48.384731 | orchestrator | ok: [testbed-manager]
2026-03-08 00:26:48.384736 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:26:48.384741 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:26:48.384747 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:26:48.384752 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:26:48.384757 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:26:48.384762 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:26:48.384768 | orchestrator |
2026-03-08 00:26:48.384773 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-08 00:26:48.384779 | orchestrator | Sunday 08 March 2026 00:26:48 +0000 (0:00:00.297) 0:01:05.460 **********
2026-03-08 00:26:48.384786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:26:48.384793 | orchestrator |
2026-03-08 00:26:48.384810 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-08 00:29:04.879765 | orchestrator | Sunday 08 March 2026 00:26:48 +0000 (0:00:00.297) 0:01:05.758 **********
2026-03-08 00:29:04.879951 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:04.879968 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.879978 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.879988 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.879998 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.880008 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.880017 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.880027 | orchestrator |
2026-03-08 00:29:04.880037 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-08 00:29:04.880048 | orchestrator | Sunday 08 March 2026 00:26:50 +0000 (0:00:01.667) 0:01:07.425 **********
2026-03-08 00:29:04.880058 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:04.880070 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:04.880079 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:04.880089 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:04.880098 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:04.880108 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:04.880117 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:04.880127 | orchestrator |
2026-03-08 00:29:04.880136 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-08 00:29:04.880147 | orchestrator | Sunday 08 March 2026 00:26:50 +0000 (0:00:00.577) 0:01:08.002 **********
2026-03-08 00:29:04.880156 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:04.880166 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.880175 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.880185 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.880194 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.880204 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.880213 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.880223 | orchestrator |
2026-03-08 00:29:04.880235 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-08 00:29:04.880248 | orchestrator | Sunday 08 March 2026 00:26:50 +0000 (0:00:00.230) 0:01:08.233 **********
2026-03-08 00:29:04.880319 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:04.880331 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.880342 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.880353 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.880365 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.880376 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.880387 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.880398 | orchestrator |
2026-03-08 00:29:04.880409 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-08 00:29:04.880421 | orchestrator | Sunday 08 March 2026 00:26:52 +0000 (0:00:01.264) 0:01:09.498 **********
2026-03-08 00:29:04.880432 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:04.880443 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:04.880454 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:04.880464 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:04.880476 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:04.880487 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:04.880499 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:04.880510 | orchestrator |
2026-03-08 00:29:04.880521 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-08 00:29:04.880536 | orchestrator | Sunday 08 March 2026 00:26:53 +0000 (0:00:01.809) 0:01:11.307 **********
2026-03-08 00:29:04.880546 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.880562 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.880580 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:04.880597 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.880613 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.880630 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.880646 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.880663 | orchestrator |
2026-03-08 00:29:04.880680 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-08 00:29:04.880698 | orchestrator | Sunday 08 March 2026 00:26:56 +0000 (0:00:02.460) 0:01:13.767 **********
2026-03-08 00:29:04.880728 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:04.880746 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.880763 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.880777 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.880787 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.880796 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.880805 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.880815 | orchestrator |
2026-03-08 00:29:04.880824 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-08 00:29:04.880834 | orchestrator | Sunday 08 March 2026 00:27:26 +0000 (0:00:30.482) 0:01:44.250 **********
2026-03-08 00:29:04.880843 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:04.880853 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:04.880864 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:04.880873 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:04.880883 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:04.880892 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:04.880902 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:04.880911 | orchestrator |
2026-03-08 00:29:04.880921 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-08 00:29:04.880931 | orchestrator | Sunday 08 March 2026 00:28:50 +0000 (0:01:23.574) 0:03:07.825 **********
2026-03-08 00:29:04.880940 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:04.880950 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.880960 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.880969 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.880979 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.880988 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.880998 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.881007 | orchestrator |
2026-03-08 00:29:04.881017 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-08 00:29:04.881026 | orchestrator | Sunday 08 March 2026 00:28:52 +0000 (0:00:01.777) 0:03:09.602 **********
2026-03-08 00:29:04.881035 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:04.881045 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:04.881055 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:04.881064 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:04.881073 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:04.881082 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:04.881092 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:04.881101 | orchestrator |
2026-03-08 00:29:04.881123 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-08 00:29:04.881133 | orchestrator | Sunday 08 March 2026 00:29:03 +0000 (0:00:11.505) 0:03:21.108 **********
2026-03-08 00:29:04.881188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-08 00:29:04.881226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-08 00:29:04.881242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-08 00:29:04.881284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-08 00:29:04.881297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-08 00:29:04.881308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-08 00:29:04.881319 | orchestrator |
2026-03-08 00:29:04.881330 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-08 00:29:04.881342 | orchestrator | Sunday 08 March 2026 00:29:04 +0000 (0:00:00.401) 0:03:21.509 **********
2026-03-08 00:29:04.881353 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881364 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:04.881376 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881387 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:29:04.881398 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881409 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:29:04.881425 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881437 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:29:04.881448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881470 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-08 00:29:04.881481 | orchestrator |
2026-03-08 00:29:04.881492 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-08 00:29:04.881503 | orchestrator | Sunday 08 March 2026 00:29:04 +0000 (0:00:00.674) 0:03:22.183 **********
2026-03-08 00:29:04.881514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:04.881527 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:04.881538 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:04.881549 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:04.881560 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:04.881579 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.705874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:10.705985 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706001 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:10.706097 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706110 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:10.706120 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706130 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:10.706139 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706149 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:10.706159 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.706169 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706189 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706199 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706208 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:10.706218 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:10.706228 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:10.706238 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:10.706309 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:10.706319 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.706329 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706339 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706348 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706358 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706369 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:10.706380 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:29:10.706392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:10.706404 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:10.706415 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:10.706425 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:10.706452 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:10.706498 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.706521 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706531 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706540 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706549 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706569 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:29:10.706579 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:29:10.706589 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:10.706598 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:10.706608 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-08 00:29:10.706617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:10.706627 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:10.706656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-08 00:29:10.706666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:10.706676 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:10.706686 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-08 00:29:10.706695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:10.706705 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:10.706714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-08 00:29:10.706736 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:10.706746 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:10.706756 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-08 00:29:10.706765 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.706775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.706784 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-08 00:29:10.706794 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706804 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-08 00:29:10.706823 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706842 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-08 00:29:10.706852 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706861 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706871 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-08 00:29:10.706881 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706890 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706900 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-08 00:29:10.706911 | orchestrator |
2026-03-08 00:29:10.706921 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-08 00:29:10.706938 | orchestrator | Sunday 08 March 2026 00:29:08 +0000 (0:00:03.875) 0:03:26.059 **********
2026-03-08 00:29:10.706948 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.706957 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.706967 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.706976 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.706991 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.707000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.707010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-08 00:29:10.707019 | orchestrator |
2026-03-08 00:29:10.707029 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-08 00:29:10.707039 | orchestrator | Sunday 08 March 2026 00:29:10 +0000 (0:00:01.531) 0:03:27.590 **********
2026-03-08 00:29:10.707048 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:10.707058 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:10.707067 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:10.707077 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:10.707087 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:29:10.707096 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:29:10.707105 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:10.707115 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:29:10.707124 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:10.707134 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:10.707151 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:29:23.328405 | orchestrator |
2026-03-08 00:29:23.328516 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-08 00:29:23.328537 | orchestrator | Sunday 08 March 2026 00:29:10 +0000 (0:00:00.491) 0:03:28.082 **********
2026-03-08 00:29:23.328551 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08
00:29:23.328567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:29:23.328582 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:29:23.328597 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:29:23.328614 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:29:23.328629 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:29:23.328644 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:29:23.328659 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:29:23.328674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-08 00:29:23.328690 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-08 00:29:23.328704 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-08 00:29:23.328721 | orchestrator | 2026-03-08 00:29:23.328736 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-08 00:29:23.328751 | orchestrator | Sunday 08 March 2026 00:29:11 +0000 (0:00:00.558) 0:03:28.640 ********** 2026-03-08 00:29:23.328796 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-08 00:29:23.328813 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:29:23.328826 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-08 00:29:23.328842 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-08 00:29:23.328856 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:29:23.328870 
| orchestrator | skipping: [testbed-node-1] 2026-03-08 00:29:23.328885 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-08 00:29:23.328900 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:29:23.328915 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-08 00:29:23.328930 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-08 00:29:23.328947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-08 00:29:23.328962 | orchestrator | 2026-03-08 00:29:23.328978 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-08 00:29:23.328993 | orchestrator | Sunday 08 March 2026 00:29:11 +0000 (0:00:00.539) 0:03:29.179 ********** 2026-03-08 00:29:23.329007 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:29:23.329022 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:29:23.329035 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:29:23.329049 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:29:23.329064 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:29:23.329097 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:29:23.329113 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:29:23.329127 | orchestrator | 2026-03-08 00:29:23.329141 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-08 00:29:23.329155 | orchestrator | Sunday 08 March 2026 00:29:12 +0000 (0:00:00.277) 0:03:29.457 ********** 2026-03-08 00:29:23.329169 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:29:23.329183 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:29:23.329196 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:29:23.329209 | orchestrator | ok: [testbed-manager] 
2026-03-08 00:29:23.329244 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:23.329257 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:23.329270 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:23.329283 | orchestrator |
2026-03-08 00:29:23.329296 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-08 00:29:23.329310 | orchestrator | Sunday 08 March 2026 00:29:17 +0000 (0:00:05.559) 0:03:35.016 **********
2026-03-08 00:29:23.329324 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-08 00:29:23.329337 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:23.329350 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-08 00:29:23.329363 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-08 00:29:23.329376 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:29:23.329390 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-08 00:29:23.329403 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:29:23.329417 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-08 00:29:23.329430 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:29:23.329444 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:29:23.329475 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-08 00:29:23.329490 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:29:23.329503 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-08 00:29:23.329515 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:29:23.329527 | orchestrator |
2026-03-08 00:29:23.329538 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-08 00:29:23.329561 | orchestrator | Sunday 08 March 2026 00:29:17 +0000 (0:00:00.291) 0:03:35.307 **********
2026-03-08 00:29:23.329575 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-08 00:29:23.329588 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-08 00:29:23.329602 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-08 00:29:23.329637 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-08 00:29:23.329653 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-08 00:29:23.329666 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-08 00:29:23.329679 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-08 00:29:23.329693 | orchestrator |
2026-03-08 00:29:23.329707 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-08 00:29:23.329720 | orchestrator | Sunday 08 March 2026 00:29:18 +0000 (0:00:01.067) 0:03:36.374 **********
2026-03-08 00:29:23.329735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:29:23.329750 | orchestrator |
2026-03-08 00:29:23.329764 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-08 00:29:23.329777 | orchestrator | Sunday 08 March 2026 00:29:19 +0000 (0:00:00.374) 0:03:36.749 **********
2026-03-08 00:29:23.329790 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:23.329804 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:23.329815 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:23.329823 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:23.329830 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:23.329838 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:23.329846 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:23.329854 | orchestrator |
2026-03-08 00:29:23.329862 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-08 00:29:23.329870 | orchestrator | Sunday 08 March 2026 00:29:20 +0000 (0:00:01.262) 0:03:38.012 **********
2026-03-08 00:29:23.329877 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:23.329885 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:23.329893 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:23.329901 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:23.329908 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:23.329916 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:23.329924 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:23.329931 | orchestrator |
2026-03-08 00:29:23.329939 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-08 00:29:23.329947 | orchestrator | Sunday 08 March 2026 00:29:21 +0000 (0:00:00.605) 0:03:38.617 **********
2026-03-08 00:29:23.329955 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:23.329963 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:23.329971 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:23.329979 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:23.329987 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:23.329995 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:23.330002 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:23.330010 | orchestrator |
2026-03-08 00:29:23.330078 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-08 00:29:23.330086 | orchestrator | Sunday 08 March 2026 00:29:21 +0000 (0:00:00.579) 0:03:39.197 **********
2026-03-08 00:29:23.330095 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:23.330103 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:23.330110 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:23.330118 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:23.330126 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:23.330134 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:23.330142 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:23.330149 | orchestrator |
2026-03-08 00:29:23.330157 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-08 00:29:23.330165 | orchestrator | Sunday 08 March 2026 00:29:22 +0000 (0:00:00.582) 0:03:39.780 **********
2026-03-08 00:29:23.330190 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928287.4590902, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:23.330202 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928309.243622, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:23.330211 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928311.2183177, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:23.330272 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928307.658064, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138365 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928300.6766694, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138452 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928312.0680826, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138462 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928318.320942, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138490 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138509 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138516 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138523 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138549 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138557 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138563 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:29:28.138575 | orchestrator |
2026-03-08 00:29:28.138583 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-08 00:29:28.138591 | orchestrator | Sunday 08 March 2026 00:29:23 +0000 (0:00:00.924) 0:03:40.704 **********
2026-03-08 00:29:28.138598 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:28.138605 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:28.138611 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:28.138617 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:28.138623 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:28.138630 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:28.138637 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:28.138643 | orchestrator |
2026-03-08 00:29:28.138649 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-08 00:29:28.138655 | orchestrator | Sunday 08 March 2026 00:29:24 +0000 (0:00:01.085) 0:03:41.790 **********
2026-03-08 00:29:28.138662 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:28.138668 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:28.138674 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:28.138680 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:28.138686 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:28.138692 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:28.138698 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:28.138704 | orchestrator |
2026-03-08 00:29:28.138714 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-08 00:29:28.138721 | orchestrator | Sunday 08 March 2026 00:29:25 +0000 (0:00:01.181) 0:03:42.972 **********
2026-03-08 00:29:28.138727 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:28.138733 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:28.138739 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:28.138745 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:28.138751 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:28.138757 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:28.138763 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:28.138769 | orchestrator |
2026-03-08 00:29:28.138776 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-08 00:29:28.138782 | orchestrator | Sunday 08 March 2026 00:29:26 +0000 (0:00:01.157) 0:03:44.129 **********
2026-03-08 00:29:28.138788 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:28.138794 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:29:28.138800 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:29:28.138806 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:29:28.138812 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:29:28.138818 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:29:28.138824 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:29:28.138830 | orchestrator |
2026-03-08 00:29:28.138837 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-08 00:29:28.138843 | orchestrator | Sunday 08 March 2026 00:29:27 +0000 (0:00:00.276) 0:03:44.406 **********
2026-03-08 00:29:28.138849 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:28.138856 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:28.138862 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:28.138868 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:28.138874 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:28.138880 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:28.138886 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:28.138892 | orchestrator |
2026-03-08 00:29:28.138898 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-08 00:29:28.138904 | orchestrator | Sunday 08 March 2026 00:29:27 +0000 (0:00:00.716) 0:03:45.122 **********
2026-03-08 00:29:28.138912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:29:28.138924 | orchestrator |
2026-03-08 00:29:28.138930 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-08 00:29:28.138940 | orchestrator | Sunday 08 March 2026 00:29:28 +0000 (0:00:00.397) 0:03:45.519 **********
2026-03-08 00:30:45.716300 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716431 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:30:45.716442 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:30:45.716449 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:30:45.716456 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:30:45.716462 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:30:45.716468 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:30:45.716474 | orchestrator |
2026-03-08 00:30:45.716483 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-08 00:30:45.716491 | orchestrator | Sunday 08 March 2026 00:29:36 +0000 (0:00:08.086) 0:03:53.606 **********
2026-03-08 00:30:45.716497 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716503 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:45.716509 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:45.716515 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:45.716522 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:45.716528 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:45.716534 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:45.716540 | orchestrator |
2026-03-08 00:30:45.716546 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-08 00:30:45.716553 | orchestrator | Sunday 08 March 2026 00:29:37 +0000 (0:00:01.313) 0:03:54.919 **********
2026-03-08 00:30:45.716560 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716566 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:45.716572 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:45.716578 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:45.716584 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:45.716590 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:45.716597 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:45.716603 | orchestrator |
2026-03-08 00:30:45.716609 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-08 00:30:45.716615 | orchestrator | Sunday 08 March 2026 00:29:38 +0000 (0:00:01.125) 0:03:56.045 **********
2026-03-08 00:30:45.716621 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716628 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:45.716634 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:45.716640 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:45.716647 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:45.716653 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:45.716660 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:45.716666 | orchestrator |
2026-03-08 00:30:45.716673 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-08 00:30:45.716681 | orchestrator | Sunday 08 March 2026 00:29:38 +0000 (0:00:00.264) 0:03:56.309 **********
2026-03-08 00:30:45.716687 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716693 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:45.716700 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:45.716706 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:45.716712 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:45.716718 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:45.716725 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:45.716731 | orchestrator |
2026-03-08 00:30:45.716738 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-08 00:30:45.716744 | orchestrator | Sunday 08 March 2026 00:29:39 +0000 (0:00:00.273) 0:03:56.583 **********
2026-03-08 00:30:45.716751 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716757 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:45.716763 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:45.716770 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:45.716776 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:45.716782 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:45.716817 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:45.716823 | orchestrator |
2026-03-08 00:30:45.716829 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-08 00:30:45.716836 | orchestrator | Sunday 08 March 2026 00:29:39 +0000 (0:00:00.291) 0:03:56.874 **********
2026-03-08 00:30:45.716842 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:45.716848 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:45.716854 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:45.716860 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:45.716866 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:45.716873 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:45.716878 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:45.716884 | orchestrator |
2026-03-08 00:30:45.716890 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-08 00:30:45.716897 | orchestrator | Sunday 08 March 2026 00:29:45 +0000 (0:00:05.576) 0:04:02.451 **********
2026-03-08 00:30:45.716906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:30:45.716915 | orchestrator |
2026-03-08 00:30:45.716921 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-08 00:30:45.716928 | orchestrator | Sunday 08 March 2026 00:29:45 +0000 (0:00:00.355) 0:04:02.807 **********
2026-03-08 00:30:45.716934 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.716940 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-08 00:30:45.716947 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.716953 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:30:45.716959 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-08 00:30:45.716988 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.716994 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-08 00:30:45.717000 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:30:45.717006 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.717012 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:30:45.717018 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-08 00:30:45.717024 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.717030 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-08 00:30:45.717036 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:30:45.717042 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:30:45.717048 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.717075 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-08 00:30:45.717081 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:30:45.717087 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-08 00:30:45.717110 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-08 00:30:45.717116 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:30:45.717122 | orchestrator |
2026-03-08 00:30:45.717127 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-08 00:30:45.717133 | orchestrator | Sunday 08 March 2026 00:29:45 +0000 (0:00:00.351) 0:04:03.131 **********
2026-03-08 00:30:45.717139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:30:45.717145 | orchestrator |
2026-03-08 00:30:45.717151 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-08 00:30:45.717156 | orchestrator | Sunday 08 March 2026 00:29:46 +0000 (0:00:00.324) 0:04:03.483 **********
2026-03-08 00:30:45.717168 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-08 00:30:45.717175 |
orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-08 00:30:45.717180 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:45.717187 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-08 00:30:45.717193 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:45.717198 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:45.717204 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-08 00:30:45.717210 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-08 00:30:45.717216 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:45.717221 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-08 00:30:45.717227 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:45.717233 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:45.717239 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-08 00:30:45.717244 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:45.717250 | orchestrator | 2026-03-08 00:30:45.717256 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-08 00:30:45.717261 | orchestrator | Sunday 08 March 2026 00:29:46 +0000 (0:00:00.289) 0:04:03.772 ********** 2026-03-08 00:30:45.717267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:30:45.717273 | orchestrator | 2026-03-08 00:30:45.717279 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-08 00:30:45.717284 | orchestrator | Sunday 08 March 2026 00:29:46 +0000 (0:00:00.386) 0:04:04.159 ********** 2026-03-08 00:30:45.717290 | orchestrator | changed: 
[testbed-node-4] 2026-03-08 00:30:45.717296 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:30:45.717301 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:30:45.717306 | orchestrator | changed: [testbed-manager] 2026-03-08 00:30:45.717317 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:30:45.717322 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:30:45.717328 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:30:45.717333 | orchestrator | 2026-03-08 00:30:45.717339 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-08 00:30:45.717344 | orchestrator | Sunday 08 March 2026 00:30:20 +0000 (0:00:34.189) 0:04:38.349 ********** 2026-03-08 00:30:45.717350 | orchestrator | changed: [testbed-manager] 2026-03-08 00:30:45.717355 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:30:45.717361 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:30:45.717367 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:30:45.717372 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:30:45.717378 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:30:45.717383 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:30:45.717388 | orchestrator | 2026-03-08 00:30:45.717394 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-08 00:30:45.717399 | orchestrator | Sunday 08 March 2026 00:30:28 +0000 (0:00:07.941) 0:04:46.290 ********** 2026-03-08 00:30:45.717405 | orchestrator | changed: [testbed-manager] 2026-03-08 00:30:45.717410 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:30:45.717416 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:30:45.717421 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:30:45.717426 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:30:45.717432 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:30:45.717437 | orchestrator | changed: [testbed-node-2] 
2026-03-08 00:30:45.717443 | orchestrator | 2026-03-08 00:30:45.717448 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-08 00:30:45.717454 | orchestrator | Sunday 08 March 2026 00:30:36 +0000 (0:00:07.826) 0:04:54.116 ********** 2026-03-08 00:30:45.717465 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:45.717471 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:45.717476 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:45.717482 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:45.717488 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:45.717493 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:45.717498 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:45.717504 | orchestrator | 2026-03-08 00:30:45.717509 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-08 00:30:45.717515 | orchestrator | Sunday 08 March 2026 00:30:39 +0000 (0:00:02.277) 0:04:56.394 ********** 2026-03-08 00:30:45.717520 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:30:45.717525 | orchestrator | changed: [testbed-manager] 2026-03-08 00:30:45.717531 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:30:45.717536 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:30:45.717542 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:30:45.717547 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:30:45.717553 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:30:45.717558 | orchestrator | 2026-03-08 00:30:45.717570 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-08 00:30:56.506913 | orchestrator | Sunday 08 March 2026 00:30:45 +0000 (0:00:06.692) 0:05:03.086 ********** 2026-03-08 00:30:56.507832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:30:56.507878 | orchestrator | 2026-03-08 00:30:56.507892 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-08 00:30:56.507903 | orchestrator | Sunday 08 March 2026 00:30:46 +0000 (0:00:00.412) 0:05:03.498 ********** 2026-03-08 00:30:56.507913 | orchestrator | changed: [testbed-manager] 2026-03-08 00:30:56.507925 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:30:56.507935 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:30:56.507944 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:30:56.507954 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:30:56.507963 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:30:56.507973 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:30:56.507982 | orchestrator | 2026-03-08 00:30:56.507992 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-08 00:30:56.508003 | orchestrator | Sunday 08 March 2026 00:30:46 +0000 (0:00:00.679) 0:05:04.177 ********** 2026-03-08 00:30:56.508012 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:56.508023 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:56.508033 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:56.508042 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:56.508052 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:56.508062 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:56.508071 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:56.508119 | orchestrator | 2026-03-08 00:30:56.508131 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-08 00:30:56.508141 | orchestrator | Sunday 08 March 2026 00:30:48 +0000 (0:00:01.805) 0:05:05.982 ********** 2026-03-08 00:30:56.508150 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:30:56.508160 | orchestrator | 
changed: [testbed-node-3] 2026-03-08 00:30:56.508170 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:30:56.508179 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:30:56.508189 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:30:56.508198 | orchestrator | changed: [testbed-manager] 2026-03-08 00:30:56.508209 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:30:56.508219 | orchestrator | 2026-03-08 00:30:56.508228 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-08 00:30:56.508238 | orchestrator | Sunday 08 March 2026 00:30:49 +0000 (0:00:00.779) 0:05:06.762 ********** 2026-03-08 00:30:56.508248 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:56.508257 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:56.508294 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:56.508305 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:56.508315 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:56.508324 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:56.508334 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:56.508343 | orchestrator | 2026-03-08 00:30:56.508353 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-08 00:30:56.508363 | orchestrator | Sunday 08 March 2026 00:30:49 +0000 (0:00:00.245) 0:05:07.008 ********** 2026-03-08 00:30:56.508372 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:56.508382 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:56.508404 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:56.508415 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:56.508424 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:56.508434 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:56.508443 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:56.508453 | orchestrator | 2026-03-08 
00:30:56.508462 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-08 00:30:56.508472 | orchestrator | Sunday 08 March 2026 00:30:49 +0000 (0:00:00.355) 0:05:07.363 ********** 2026-03-08 00:30:56.508482 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:56.508491 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:56.508501 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:56.508510 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:56.508520 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:56.508529 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:56.508539 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:56.508548 | orchestrator | 2026-03-08 00:30:56.508558 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-08 00:30:56.508568 | orchestrator | Sunday 08 March 2026 00:30:50 +0000 (0:00:00.289) 0:05:07.653 ********** 2026-03-08 00:30:56.508577 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:56.508587 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:56.508596 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:56.508606 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:56.508615 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:56.508625 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:56.508634 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:56.508644 | orchestrator | 2026-03-08 00:30:56.508653 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-08 00:30:56.508664 | orchestrator | Sunday 08 March 2026 00:30:50 +0000 (0:00:00.298) 0:05:07.952 ********** 2026-03-08 00:30:56.508674 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:56.508683 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:56.508693 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:56.508702 | orchestrator | 
ok: [testbed-node-5] 2026-03-08 00:30:56.508712 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:56.508721 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:56.508731 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:56.508742 | orchestrator | 2026-03-08 00:30:56.508758 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-08 00:30:56.508780 | orchestrator | Sunday 08 March 2026 00:30:50 +0000 (0:00:00.299) 0:05:08.251 ********** 2026-03-08 00:30:56.508802 | orchestrator | ok: [testbed-manager] =>  2026-03-08 00:30:56.508818 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.508833 | orchestrator | ok: [testbed-node-3] =>  2026-03-08 00:30:56.508847 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.508863 | orchestrator | ok: [testbed-node-4] =>  2026-03-08 00:30:56.508878 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.508891 | orchestrator | ok: [testbed-node-5] =>  2026-03-08 00:30:56.508905 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.508951 | orchestrator | ok: [testbed-node-0] =>  2026-03-08 00:30:56.508969 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.508999 | orchestrator | ok: [testbed-node-1] =>  2026-03-08 00:30:56.509016 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.509026 | orchestrator | ok: [testbed-node-2] =>  2026-03-08 00:30:56.509036 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:56.509045 | orchestrator | 2026-03-08 00:30:56.509055 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-08 00:30:56.509065 | orchestrator | Sunday 08 March 2026 00:30:51 +0000 (0:00:00.263) 0:05:08.515 ********** 2026-03-08 00:30:56.509074 | orchestrator | ok: [testbed-manager] =>  2026-03-08 00:30:56.509110 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509120 | orchestrator | ok: [testbed-node-3] =>  2026-03-08 
00:30:56.509129 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509139 | orchestrator | ok: [testbed-node-4] =>  2026-03-08 00:30:56.509149 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509158 | orchestrator | ok: [testbed-node-5] =>  2026-03-08 00:30:56.509167 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509177 | orchestrator | ok: [testbed-node-0] =>  2026-03-08 00:30:56.509186 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509196 | orchestrator | ok: [testbed-node-1] =>  2026-03-08 00:30:56.509205 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509215 | orchestrator | ok: [testbed-node-2] =>  2026-03-08 00:30:56.509225 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:56.509234 | orchestrator | 2026-03-08 00:30:56.509244 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-08 00:30:56.509254 | orchestrator | Sunday 08 March 2026 00:30:51 +0000 (0:00:00.272) 0:05:08.788 ********** 2026-03-08 00:30:56.509263 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:56.509273 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:56.509283 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:56.509292 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:56.509302 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:56.509371 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:56.509383 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:56.509393 | orchestrator | 2026-03-08 00:30:56.509403 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-08 00:30:56.509413 | orchestrator | Sunday 08 March 2026 00:30:51 +0000 (0:00:00.263) 0:05:09.052 ********** 2026-03-08 00:30:56.509423 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:56.509432 | orchestrator | skipping: [testbed-node-3] 
2026-03-08 00:30:56.509442 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:56.509451 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:56.509461 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:56.509470 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:56.509480 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:56.509489 | orchestrator | 2026-03-08 00:30:56.509499 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-08 00:30:56.509509 | orchestrator | Sunday 08 March 2026 00:30:51 +0000 (0:00:00.286) 0:05:09.339 ********** 2026-03-08 00:30:56.509521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:30:56.509532 | orchestrator | 2026-03-08 00:30:56.509550 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-08 00:30:56.509560 | orchestrator | Sunday 08 March 2026 00:30:52 +0000 (0:00:00.406) 0:05:09.745 ********** 2026-03-08 00:30:56.509570 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:56.509580 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:56.509589 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:56.509599 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:56.509609 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:56.509618 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:56.509717 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:56.509738 | orchestrator | 2026-03-08 00:30:56.509748 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-08 00:30:56.509758 | orchestrator | Sunday 08 March 2026 00:30:53 +0000 (0:00:00.969) 0:05:10.714 ********** 2026-03-08 00:30:56.509767 | orchestrator 
| ok: [testbed-node-4] 2026-03-08 00:30:56.509777 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:56.509787 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:56.509796 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:56.509806 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:56.509815 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:56.509825 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:56.509834 | orchestrator | 2026-03-08 00:30:56.509844 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-08 00:30:56.509855 | orchestrator | Sunday 08 March 2026 00:30:56 +0000 (0:00:02.799) 0:05:13.513 ********** 2026-03-08 00:30:56.509865 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-08 00:30:56.509875 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-08 00:30:56.509884 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-08 00:30:56.509894 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-08 00:30:56.509904 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-08 00:30:56.509914 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-08 00:30:56.509923 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:56.509933 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-08 00:30:56.509942 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-08 00:30:56.509952 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-08 00:30:56.509961 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:56.509971 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-08 00:30:56.509981 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-08 00:30:56.509990 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2026-03-08 00:30:56.510000 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:56.510010 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-08 00:30:56.510105 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-08 00:31:55.600443 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-08 00:31:55.600560 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:31:55.600579 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-08 00:31:55.600591 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-08 00:31:55.600603 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-08 00:31:55.600615 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:31:55.600627 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:31:55.600638 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-08 00:31:55.600650 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-08 00:31:55.600662 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-08 00:31:55.600674 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:31:55.600686 | orchestrator | 2026-03-08 00:31:55.600699 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-08 00:31:55.600711 | orchestrator | Sunday 08 March 2026 00:30:56 +0000 (0:00:00.566) 0:05:14.080 ********** 2026-03-08 00:31:55.600723 | orchestrator | ok: [testbed-manager] 2026-03-08 00:31:55.600735 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.600747 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.600758 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.600770 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.600782 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.600794 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.600806 | 
orchestrator | 2026-03-08 00:31:55.600818 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-08 00:31:55.600856 | orchestrator | Sunday 08 March 2026 00:31:03 +0000 (0:00:06.559) 0:05:20.639 ********** 2026-03-08 00:31:55.600869 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.600880 | orchestrator | ok: [testbed-manager] 2026-03-08 00:31:55.600892 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.600903 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.600915 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.600927 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.600938 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.600950 | orchestrator | 2026-03-08 00:31:55.600961 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-08 00:31:55.600973 | orchestrator | Sunday 08 March 2026 00:31:04 +0000 (0:00:01.031) 0:05:21.671 ********** 2026-03-08 00:31:55.600985 | orchestrator | ok: [testbed-manager] 2026-03-08 00:31:55.601053 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.601065 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.601075 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.601086 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.601097 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.601107 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.601118 | orchestrator | 2026-03-08 00:31:55.601128 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-08 00:31:55.601139 | orchestrator | Sunday 08 March 2026 00:31:12 +0000 (0:00:07.913) 0:05:29.584 ********** 2026-03-08 00:31:55.601150 | orchestrator | changed: [testbed-manager] 2026-03-08 00:31:55.601161 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.601171 | orchestrator | changed: 
[testbed-node-3] 2026-03-08 00:31:55.601182 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.601192 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.601203 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.601214 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.601225 | orchestrator | 2026-03-08 00:31:55.601235 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-08 00:31:55.601246 | orchestrator | Sunday 08 March 2026 00:31:15 +0000 (0:00:03.266) 0:05:32.851 ********** 2026-03-08 00:31:55.601257 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.601267 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.601278 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.601289 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.601299 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.601310 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.601320 | orchestrator | ok: [testbed-manager] 2026-03-08 00:31:55.601331 | orchestrator | 2026-03-08 00:31:55.601342 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-08 00:31:55.601353 | orchestrator | Sunday 08 March 2026 00:31:17 +0000 (0:00:01.795) 0:05:34.646 ********** 2026-03-08 00:31:55.601364 | orchestrator | ok: [testbed-manager] 2026-03-08 00:31:55.601374 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.601385 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.601395 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.601406 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.601416 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.601440 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.601463 | orchestrator | 2026-03-08 00:31:55.601474 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-03-08 00:31:55.601485 | orchestrator | Sunday 08 March 2026 00:31:18 +0000 (0:00:01.544) 0:05:36.190 ********** 2026-03-08 00:31:55.601495 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:31:55.601506 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:31:55.601517 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:31:55.601527 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:31:55.601538 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:31:55.601548 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:31:55.601569 | orchestrator | changed: [testbed-manager] 2026-03-08 00:31:55.601580 | orchestrator | 2026-03-08 00:31:55.601590 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-08 00:31:55.601601 | orchestrator | Sunday 08 March 2026 00:31:19 +0000 (0:00:00.609) 0:05:36.800 ********** 2026-03-08 00:31:55.601612 | orchestrator | ok: [testbed-manager] 2026-03-08 00:31:55.601623 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:31:55.601633 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.601644 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.601654 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:31:55.601665 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:31:55.601675 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:31:55.601686 | orchestrator | 2026-03-08 00:31:55.601697 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-08 00:31:55.601725 | orchestrator | Sunday 08 March 2026 00:31:29 +0000 (0:00:09.631) 0:05:46.431 ********** 2026-03-08 00:31:55.601737 | orchestrator | changed: [testbed-manager] 2026-03-08 00:31:55.601747 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:31:55.601758 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:31:55.601769 | orchestrator | changed: [testbed-node-5] 2026-03-08 
00:31:55.601779 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:55.601790 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:55.601800 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:55.601811 | orchestrator |
2026-03-08 00:31:55.601822 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-08 00:31:55.601833 | orchestrator | Sunday 08 March 2026 00:31:29 +0000 (0:00:00.915) 0:05:47.346 **********
2026-03-08 00:31:55.601844 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:55.601854 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:55.601865 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:55.601876 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:55.601886 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:55.601897 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:55.601907 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:55.601918 | orchestrator |
2026-03-08 00:31:55.601929 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-08 00:31:55.601940 | orchestrator | Sunday 08 March 2026 00:31:38 +0000 (0:00:08.358) 0:05:55.705 **********
2026-03-08 00:31:55.601950 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:55.601961 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:55.601972 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:55.601983 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:55.602013 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:55.602093 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:55.602104 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:55.602115 | orchestrator |
2026-03-08 00:31:55.602127 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-08 00:31:55.602138 | orchestrator | Sunday 08 March 2026 00:31:49 +0000 (0:00:10.796) 0:06:06.501 **********
2026-03-08 00:31:55.602149 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-08 00:31:55.602160 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-08 00:31:55.602170 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-08 00:31:55.602181 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-08 00:31:55.602192 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-08 00:31:55.602203 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-08 00:31:55.602213 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-08 00:31:55.602224 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-08 00:31:55.602235 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-08 00:31:55.602245 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-08 00:31:55.602303 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-08 00:31:55.602325 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-08 00:31:55.602336 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-08 00:31:55.602347 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-08 00:31:55.602358 | orchestrator |
2026-03-08 00:31:55.602369 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-08 00:31:55.602380 | orchestrator | Sunday 08 March 2026 00:31:50 +0000 (0:00:01.204) 0:06:07.706 **********
2026-03-08 00:31:55.602395 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:55.602406 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:55.602417 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:55.602428 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:55.602438 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:55.602449 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:55.602460 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:55.602470 | orchestrator |
2026-03-08 00:31:55.602481 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-08 00:31:55.602492 | orchestrator | Sunday 08 March 2026 00:31:50 +0000 (0:00:00.514) 0:06:08.220 **********
2026-03-08 00:31:55.602503 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:55.602514 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:55.602524 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:55.602535 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:55.602545 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:55.602556 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:55.602567 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:55.602578 | orchestrator |
2026-03-08 00:31:55.602589 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-08 00:31:55.602601 | orchestrator | Sunday 08 March 2026 00:31:54 +0000 (0:00:03.791) 0:06:12.012 **********
2026-03-08 00:31:55.602611 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:55.602622 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:55.602633 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:55.602643 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:55.602654 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:55.602664 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:55.602675 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:55.602685 | orchestrator |
2026-03-08 00:31:55.602697 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-08 00:31:55.602708 | orchestrator | Sunday 08 March 2026 00:31:55 +0000 (0:00:00.508) 0:06:12.521 **********
2026-03-08 00:31:55.602719 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-08 00:31:55.602730 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-08 00:31:55.602741 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:55.602752 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-08 00:31:55.602762 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-08 00:31:55.602773 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-08 00:31:55.602784 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-08 00:31:55.602795 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:55.602805 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-08 00:31:55.602825 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-08 00:32:14.466418 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:14.466544 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-08 00:32:14.466559 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-08 00:32:14.466569 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:14.466593 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-08 00:32:14.466655 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-08 00:32:14.466693 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:14.466711 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:14.466741 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-08 00:32:14.466760 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-08 00:32:14.466779 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:14.466797 | orchestrator |
2026-03-08 00:32:14.466817 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-08 00:32:14.466835 | orchestrator | Sunday 08 March 2026 00:31:55 +0000 (0:00:00.713) 0:06:13.235 **********
2026-03-08 00:32:14.466852 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:14.466870 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:14.466889 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:14.466907 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:14.466927 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:14.466947 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:14.467000 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:14.467020 | orchestrator |
2026-03-08 00:32:14.467039 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-08 00:32:14.467058 | orchestrator | Sunday 08 March 2026 00:31:56 +0000 (0:00:00.499) 0:06:13.734 **********
2026-03-08 00:32:14.467076 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:14.467095 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:14.467114 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:14.467134 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:14.467152 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:14.467171 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:14.467189 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:14.467207 | orchestrator |
2026-03-08 00:32:14.467227 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-08 00:32:14.467245 | orchestrator | Sunday 08 March 2026 00:31:56 +0000 (0:00:00.482) 0:06:14.217 **********
2026-03-08 00:32:14.467263 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:14.467283 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:14.467301 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:14.467321 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:14.467332 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:14.467343 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:14.467354 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:14.467365 | orchestrator |
2026-03-08 00:32:14.467376 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-08 00:32:14.467387 | orchestrator | Sunday 08 March 2026 00:31:57 +0000 (0:00:00.498) 0:06:14.716 **********
2026-03-08 00:32:14.467398 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.467409 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:14.467420 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:14.467431 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:14.467442 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:14.467452 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:14.467463 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:14.467474 | orchestrator |
2026-03-08 00:32:14.467485 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-08 00:32:14.467496 | orchestrator | Sunday 08 March 2026 00:31:59 +0000 (0:00:01.933) 0:06:16.649 **********
2026-03-08 00:32:14.467508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:14.467522 | orchestrator |
2026-03-08 00:32:14.467534 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-08 00:32:14.467545 | orchestrator | Sunday 08 March 2026 00:32:00 +0000 (0:00:00.883) 0:06:17.532 **********
2026-03-08 00:32:14.467556 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.467588 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:14.467600 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:14.467611 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:14.467621 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:14.467645 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:14.467656 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:14.467667 | orchestrator |
2026-03-08 00:32:14.467678 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-08 00:32:14.467689 | orchestrator | Sunday 08 March 2026 00:32:00 +0000 (0:00:00.830) 0:06:18.363 **********
2026-03-08 00:32:14.467700 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.467711 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:14.467721 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:14.467732 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:14.467742 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:14.467753 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:14.467764 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:14.467774 | orchestrator |
2026-03-08 00:32:14.467785 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-08 00:32:14.467796 | orchestrator | Sunday 08 March 2026 00:32:01 +0000 (0:00:01.493) 0:06:19.190 **********
2026-03-08 00:32:14.467807 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.467817 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:14.467828 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:14.467839 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:14.467850 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:14.467860 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:14.467871 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:14.467882 | orchestrator |
2026-03-08 00:32:14.467892 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-08 00:32:14.467924 | orchestrator | Sunday 08 March 2026 00:32:03 +0000 (0:00:01.284) 0:06:20.684 **********
2026-03-08 00:32:14.467936 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:14.467946 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:14.467989 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:14.468002 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:14.468013 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:14.468023 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:14.468034 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:14.468045 | orchestrator |
2026-03-08 00:32:14.468055 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-08 00:32:14.468066 | orchestrator | Sunday 08 March 2026 00:32:04 +0000 (0:00:01.284) 0:06:21.968 **********
2026-03-08 00:32:14.468077 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.468088 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:14.468099 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:14.468110 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:14.468120 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:14.468131 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:14.468142 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:14.468153 | orchestrator |
2026-03-08 00:32:14.468164 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-08 00:32:14.468175 | orchestrator | Sunday 08 March 2026 00:32:05 +0000 (0:00:01.336) 0:06:23.304 **********
2026-03-08 00:32:14.468185 | orchestrator | changed: [testbed-manager]
2026-03-08 00:32:14.468196 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:14.468207 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:14.468217 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:14.468228 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:14.468239 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:14.468249 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:14.468260 | orchestrator |
2026-03-08 00:32:14.468271 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-08 00:32:14.468282 | orchestrator | Sunday 08 March 2026 00:32:07 +0000 (0:00:01.381) 0:06:24.686 **********
2026-03-08 00:32:14.468301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:14.468313 | orchestrator |
2026-03-08 00:32:14.468323 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-08 00:32:14.468334 | orchestrator | Sunday 08 March 2026 00:32:08 +0000 (0:00:01.040) 0:06:25.726 **********
2026-03-08 00:32:14.468345 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.468356 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:14.468367 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:14.468377 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:14.468388 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:14.468399 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:14.468409 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:14.468420 | orchestrator |
2026-03-08 00:32:14.468431 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-08 00:32:14.468443 | orchestrator | Sunday 08 March 2026 00:32:09 +0000 (0:00:01.274) 0:06:27.001 **********
2026-03-08 00:32:14.468463 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.468487 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:14.468515 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:14.468534 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:14.468552 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:14.468590 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:14.468608 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:14.468628 | orchestrator |
2026-03-08 00:32:14.468646 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-08 00:32:14.468665 | orchestrator | Sunday 08 March 2026 00:32:10 +0000 (0:00:01.132) 0:06:28.133 **********
2026-03-08 00:32:14.468683 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.468702 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:14.468721 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:14.468740 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:14.468760 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:14.468780 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:14.468799 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:14.468811 | orchestrator |
2026-03-08 00:32:14.468822 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-08 00:32:14.468833 | orchestrator | Sunday 08 March 2026 00:32:11 +0000 (0:00:01.116) 0:06:29.249 **********
2026-03-08 00:32:14.468844 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:14.468854 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:14.468881 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:14.468893 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:14.468903 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:14.468914 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:14.468925 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:14.468935 | orchestrator |
2026-03-08 00:32:14.468946 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-08 00:32:14.468957 | orchestrator | Sunday 08 March 2026 00:32:13 +0000 (0:00:01.369) 0:06:30.619 **********
2026-03-08 00:32:14.469001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:14.469013 | orchestrator |
2026-03-08 00:32:14.469024 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:14.469035 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.925) 0:06:31.545 **********
2026-03-08 00:32:14.469046 | orchestrator |
2026-03-08 00:32:14.469057 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:14.469068 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.038) 0:06:31.583 **********
2026-03-08 00:32:14.469078 | orchestrator |
2026-03-08 00:32:14.469104 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:14.469122 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.046) 0:06:31.629 **********
2026-03-08 00:32:14.469139 | orchestrator |
2026-03-08 00:32:14.469157 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:14.469174 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.037) 0:06:31.666 **********
2026-03-08 00:32:14.469190 | orchestrator |
2026-03-08 00:32:14.469225 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:39.751170 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.038) 0:06:31.705 **********
2026-03-08 00:32:39.751283 | orchestrator |
2026-03-08 00:32:39.751300 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:39.751313 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.045) 0:06:31.750 **********
2026-03-08 00:32:39.751324 | orchestrator |
2026-03-08 00:32:39.751335 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:32:39.751346 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.037) 0:06:31.788 **********
2026-03-08 00:32:39.751357 | orchestrator |
2026-03-08 00:32:39.751368 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-08 00:32:39.751379 | orchestrator | Sunday 08 March 2026 00:32:14 +0000 (0:00:00.038) 0:06:31.827 **********
2026-03-08 00:32:39.751390 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:39.751402 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:39.751413 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:39.751424 | orchestrator |
2026-03-08 00:32:39.751435 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-08 00:32:39.751446 | orchestrator | Sunday 08 March 2026 00:32:15 +0000 (0:00:01.171) 0:06:32.998 **********
2026-03-08 00:32:39.751457 | orchestrator | changed: [testbed-manager]
2026-03-08 00:32:39.751468 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:39.751479 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:39.751490 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:39.751500 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:39.751511 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:39.751522 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:39.751533 | orchestrator |
2026-03-08 00:32:39.751544 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-08 00:32:39.751555 | orchestrator | Sunday 08 March 2026 00:32:16 +0000 (0:00:01.371) 0:06:34.369 **********
2026-03-08 00:32:39.751566 | orchestrator | changed: [testbed-manager]
2026-03-08 00:32:39.751576 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:39.751587 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:39.751598 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:39.751608 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:39.751619 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:39.751630 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:39.751641 | orchestrator |
2026-03-08 00:32:39.751652 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-08 00:32:39.751662 | orchestrator | Sunday 08 March 2026 00:32:18 +0000 (0:00:01.376) 0:06:35.746 **********
2026-03-08 00:32:39.751673 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:39.751684 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:39.751695 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:39.751705 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:39.751716 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:39.751727 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:39.751738 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:39.751748 | orchestrator |
2026-03-08 00:32:39.751759 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-08 00:32:39.751770 | orchestrator | Sunday 08 March 2026 00:32:20 +0000 (0:00:02.243) 0:06:37.989 **********
2026-03-08 00:32:39.751797 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:39.751832 | orchestrator |
2026-03-08 00:32:39.751844 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-08 00:32:39.751856 | orchestrator | Sunday 08 March 2026 00:32:20 +0000 (0:00:00.104) 0:06:38.094 **********
2026-03-08 00:32:39.751867 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:39.751878 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:39.751889 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:39.751899 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:39.751910 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:39.751941 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:39.751953 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:39.751964 | orchestrator |
2026-03-08 00:32:39.751975 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-08 00:32:39.751986 | orchestrator | Sunday 08 March 2026 00:32:21 +0000 (0:00:01.027) 0:06:39.122 **********
2026-03-08 00:32:39.751997 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:39.752008 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:39.752018 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:39.752029 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:39.752039 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:39.752050 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:39.752060 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:39.752071 | orchestrator |
2026-03-08 00:32:39.752082 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-08 00:32:39.752093 | orchestrator | Sunday 08 March 2026 00:32:22 +0000 (0:00:00.541) 0:06:39.663 **********
2026-03-08 00:32:39.752104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:39.752118 | orchestrator |
2026-03-08 00:32:39.752129 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-08 00:32:39.752140 | orchestrator | Sunday 08 March 2026 00:32:23 +0000 (0:00:01.072) 0:06:40.736 **********
2026-03-08 00:32:39.752150 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:39.752161 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:39.752172 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:39.752183 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:39.752194 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:39.752204 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:39.752215 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:39.752226 | orchestrator |
2026-03-08 00:32:39.752237 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-08 00:32:39.752248 | orchestrator | Sunday 08 March 2026 00:32:24 +0000 (0:00:00.845) 0:06:41.582 **********
2026-03-08 00:32:39.752259 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-08 00:32:39.752270 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-08 00:32:39.752297 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-08 00:32:39.752309 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-08 00:32:39.752320 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-08 00:32:39.752330 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-08 00:32:39.752341 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-08 00:32:39.752352 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-08 00:32:39.752363 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-08 00:32:39.752374 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-08 00:32:39.752384 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-08 00:32:39.752395 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-08 00:32:39.752406 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-08 00:32:39.752424 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-08 00:32:39.752435 | orchestrator |
2026-03-08 00:32:39.752446 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-08 00:32:39.752456 | orchestrator | Sunday 08 March 2026 00:32:26 +0000 (0:00:02.418) 0:06:44.000 **********
2026-03-08 00:32:39.752467 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:39.752478 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:39.752488 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:39.752499 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:39.752510 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:39.752520 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:39.752531 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:39.752541 | orchestrator |
2026-03-08 00:32:39.752552 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-08 00:32:39.752563 | orchestrator | Sunday 08 March 2026 00:32:27 +0000 (0:00:00.632) 0:06:44.633 **********
2026-03-08 00:32:39.752575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:39.752589 | orchestrator |
2026-03-08 00:32:39.752609 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-08 00:32:39.752628 | orchestrator | Sunday 08 March 2026 00:32:28 +0000 (0:00:00.771) 0:06:45.404 **********
2026-03-08 00:32:39.752647 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:39.752665 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:39.752683 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:39.752700 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:39.752719 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:39.752737 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:39.752757 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:39.752775 | orchestrator |
2026-03-08 00:32:39.752795 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-08 00:32:39.752824 | orchestrator | Sunday 08 March 2026 00:32:28 +0000 (0:00:00.806) 0:06:46.210 **********
2026-03-08 00:32:39.752843 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:39.752862 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:39.752874 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:39.752884 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:39.752895 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:39.752906 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:39.752916 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:39.752984 | orchestrator |
2026-03-08 00:32:39.752995 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-08 00:32:39.753006 | orchestrator | Sunday 08 March 2026 00:32:29 +0000 (0:00:01.010) 0:06:47.221 **********
2026-03-08 00:32:39.753017 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:39.753028 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:39.753038 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:39.753049 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:39.753060 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:39.753070 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:39.753081 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:39.753092 | orchestrator |
2026-03-08 00:32:39.753103 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-08 00:32:39.753114 | orchestrator | Sunday 08 March 2026 00:32:30 +0000 (0:00:00.498) 0:06:47.720 **********
2026-03-08 00:32:39.753124 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:39.753135 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:39.753146 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:39.753156 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:39.753167 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:39.753177 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:39.753188 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:39.753209 | orchestrator |
2026-03-08 00:32:39.753220 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-08 00:32:39.753230 | orchestrator | Sunday 08 March 2026 00:32:31 +0000 (0:00:01.599) 0:06:49.320 **********
2026-03-08 00:32:39.753241 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:39.753252 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:39.753263 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:39.753273 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:39.753284 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:39.753295 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:39.753305 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:39.753316 | orchestrator |
2026-03-08 00:32:39.753327 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-08 00:32:39.753338 | orchestrator | Sunday 08 March 2026 00:32:32 +0000 (0:00:00.486) 0:06:49.806 **********
2026-03-08 00:32:39.753349 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:39.753359 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:39.753370 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:39.753381 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:39.753391 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:39.753402 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:39.753413 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:39.753423 | orchestrator |
2026-03-08 00:32:39.753444 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-08 00:33:11.605022 | orchestrator | Sunday 08 March 2026 00:32:39 +0000 (0:00:07.313) 0:06:57.120 **********
2026-03-08 00:33:11.605134 | orchestrator | ok: [testbed-manager]
2026-03-08 00:33:11.605152 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:33:11.605167 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:33:11.605179 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:33:11.605191 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:33:11.605203 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:33:11.605215 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:33:11.605227 | orchestrator |
2026-03-08 00:33:11.605240 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-08 00:33:11.605253 | orchestrator | Sunday 08 March 2026 00:32:41 +0000 (0:00:01.544) 0:06:58.664 **********
2026-03-08 00:33:11.605265 | orchestrator | ok: [testbed-manager]
2026-03-08 00:33:11.605277 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:33:11.605289 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:33:11.605301 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:33:11.605313 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:33:11.605325 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:33:11.605336 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:33:11.605348 | orchestrator |
2026-03-08 00:33:11.605361 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-08 00:33:11.605373 | orchestrator | Sunday 08 March 2026 00:32:42 +0000 (0:00:01.690) 0:07:00.355 **********
2026-03-08 00:33:11.605385 | orchestrator | ok: [testbed-manager]
2026-03-08 00:33:11.605397 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:33:11.605409 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:33:11.605421 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:33:11.605432 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:33:11.605444 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:33:11.605456 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:33:11.605468 | orchestrator |
2026-03-08 00:33:11.605483 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-08 00:33:11.605497 | orchestrator | Sunday 08 March 2026 00:32:44 +0000 (0:00:01.666) 0:07:02.022 **********
2026-03-08 00:33:11.605511 | orchestrator | ok: [testbed-manager]
2026-03-08 00:33:11.605525 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:33:11.605539 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:33:11.605552 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:33:11.605566 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:33:11.605606 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:33:11.605621 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:33:11.605635 | orchestrator |
2026-03-08 00:33:11.605648 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-08 00:33:11.605662 | orchestrator | Sunday 08 March 2026 00:32:45 +0000 (0:00:00.868) 0:07:02.890 **********
2026-03-08 00:33:11.605675 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:33:11.605689 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:33:11.605703 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:33:11.605717 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:33:11.605730 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:33:11.605744 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:33:11.605757 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:33:11.605770 | orchestrator |
2026-03-08 00:33:11.605784 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-08 00:33:11.605798 | orchestrator | Sunday 08 March 2026 00:32:46 +0000 (0:00:00.921) 0:07:03.812 **********
2026-03-08 00:33:11.605812 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:33:11.605826 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:33:11.605837 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:33:11.605849 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:33:11.605861 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:33:11.605895 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:33:11.605907 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:33:11.605919 | orchestrator |
2026-03-08 00:33:11.605931 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-08 00:33:11.605960 | orchestrator | Sunday 08 March 2026 00:32:46 +0000 (0:00:00.498) 0:07:04.311 **********
2026-03-08 00:33:11.605973 | orchestrator | ok: [testbed-manager]
2026-03-08 00:33:11.605984 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:33:11.605996 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:33:11.606007 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:33:11.606071 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:33:11.606085 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:33:11.606096 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:33:11.606107 | orchestrator |
2026-03-08 00:33:11.606118 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-08 00:33:11.606129 | orchestrator | Sunday 08 March 2026 00:32:47 +0000 (0:00:00.493) 0:07:04.804 **********
2026-03-08 00:33:11.606140 | orchestrator | ok: [testbed-manager]
2026-03-08 00:33:11.606151 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:33:11.606161 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:33:11.606172 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:33:11.606183 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:33:11.606194 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:33:11.606205 | orchestrator | ok: [testbed-node-2]
2026-03-08
00:33:11.606216 | orchestrator | 2026-03-08 00:33:11.606227 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-08 00:33:11.606238 | orchestrator | Sunday 08 March 2026 00:32:47 +0000 (0:00:00.506) 0:07:05.311 ********** 2026-03-08 00:33:11.606249 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:11.606259 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:11.606270 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:11.606281 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:11.606291 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:11.606302 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:11.606313 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:11.606323 | orchestrator | 2026-03-08 00:33:11.606334 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-08 00:33:11.606345 | orchestrator | Sunday 08 March 2026 00:32:48 +0000 (0:00:00.684) 0:07:05.996 ********** 2026-03-08 00:33:11.606356 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:11.606367 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:11.606377 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:11.606388 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:11.606409 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:11.606419 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:11.606430 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:11.606441 | orchestrator | 2026-03-08 00:33:11.606452 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-08 00:33:11.606481 | orchestrator | Sunday 08 March 2026 00:32:54 +0000 (0:00:05.485) 0:07:11.481 ********** 2026-03-08 00:33:11.606493 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:33:11.606504 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:33:11.606515 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:33:11.606526 
| orchestrator | skipping: [testbed-node-5] 2026-03-08 00:33:11.606537 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:33:11.606548 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:33:11.606559 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:33:11.606570 | orchestrator | 2026-03-08 00:33:11.606581 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-08 00:33:11.606592 | orchestrator | Sunday 08 March 2026 00:32:54 +0000 (0:00:00.519) 0:07:12.000 ********** 2026-03-08 00:33:11.606604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:11.606618 | orchestrator | 2026-03-08 00:33:11.606629 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-08 00:33:11.606640 | orchestrator | Sunday 08 March 2026 00:32:55 +0000 (0:00:00.945) 0:07:12.946 ********** 2026-03-08 00:33:11.606651 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:11.606662 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:11.606673 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:11.606684 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:11.606695 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:11.606706 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:11.606717 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:11.606727 | orchestrator | 2026-03-08 00:33:11.606738 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-08 00:33:11.606749 | orchestrator | Sunday 08 March 2026 00:32:57 +0000 (0:00:01.801) 0:07:14.747 ********** 2026-03-08 00:33:11.606760 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:11.606771 | orchestrator | ok: [testbed-node-3] 2026-03-08 
00:33:11.606781 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:11.606792 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:11.606803 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:11.606814 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:11.606825 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:11.606836 | orchestrator | 2026-03-08 00:33:11.606847 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-08 00:33:11.606857 | orchestrator | Sunday 08 March 2026 00:32:58 +0000 (0:00:01.180) 0:07:15.927 ********** 2026-03-08 00:33:11.606898 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:11.606912 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:11.606922 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:11.606933 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:11.606944 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:11.606954 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:11.606965 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:11.606976 | orchestrator | 2026-03-08 00:33:11.606987 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-08 00:33:11.606998 | orchestrator | Sunday 08 March 2026 00:32:59 +0000 (0:00:00.837) 0:07:16.765 ********** 2026-03-08 00:33:11.607015 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607028 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607046 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607057 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607068 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607079 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607090 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-08 00:33:11.607100 | orchestrator | 2026-03-08 00:33:11.607112 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-08 00:33:11.607123 | orchestrator | Sunday 08 March 2026 00:33:01 +0000 (0:00:01.854) 0:07:18.620 ********** 2026-03-08 00:33:11.607134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:11.607145 | orchestrator | 2026-03-08 00:33:11.607157 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-08 00:33:11.607167 | orchestrator | Sunday 08 March 2026 00:33:02 +0000 (0:00:00.777) 0:07:19.398 ********** 2026-03-08 00:33:11.607178 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:11.607190 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:11.607201 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:11.607211 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:11.607222 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:11.607233 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:11.607244 | orchestrator | changed: 
[testbed-node-2] 2026-03-08 00:33:11.607254 | orchestrator | 2026-03-08 00:33:11.607265 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-08 00:33:11.607284 | orchestrator | Sunday 08 March 2026 00:33:11 +0000 (0:00:09.578) 0:07:28.976 ********** 2026-03-08 00:33:42.722618 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:42.722726 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:42.722741 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:42.722754 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:42.722765 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:42.722776 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:42.722786 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:42.722797 | orchestrator | 2026-03-08 00:33:42.722809 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-08 00:33:42.722867 | orchestrator | Sunday 08 March 2026 00:33:13 +0000 (0:00:02.010) 0:07:30.986 ********** 2026-03-08 00:33:42.722880 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:42.722891 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:42.722902 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:42.722912 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:42.722923 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:42.722934 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:42.722944 | orchestrator | 2026-03-08 00:33:42.722955 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-08 00:33:42.722966 | orchestrator | Sunday 08 March 2026 00:33:14 +0000 (0:00:01.283) 0:07:32.270 ********** 2026-03-08 00:33:42.722977 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.722989 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.723000 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.723010 | orchestrator | changed: 
[testbed-node-5] 2026-03-08 00:33:42.723021 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.723032 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.723042 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.723081 | orchestrator | 2026-03-08 00:33:42.723093 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-08 00:33:42.723103 | orchestrator | 2026-03-08 00:33:42.723116 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-08 00:33:42.723129 | orchestrator | Sunday 08 March 2026 00:33:16 +0000 (0:00:01.323) 0:07:33.594 ********** 2026-03-08 00:33:42.723142 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:33:42.723155 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:33:42.723168 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:33:42.723180 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:33:42.723192 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:33:42.723205 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:33:42.723218 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:33:42.723230 | orchestrator | 2026-03-08 00:33:42.723243 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-08 00:33:42.723256 | orchestrator | 2026-03-08 00:33:42.723269 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-08 00:33:42.723282 | orchestrator | Sunday 08 March 2026 00:33:16 +0000 (0:00:00.786) 0:07:34.380 ********** 2026-03-08 00:33:42.723295 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.723307 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.723319 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.723331 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.723344 | orchestrator | changed: [testbed-node-0] 2026-03-08 
00:33:42.723356 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.723369 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.723383 | orchestrator | 2026-03-08 00:33:42.723395 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-08 00:33:42.723422 | orchestrator | Sunday 08 March 2026 00:33:18 +0000 (0:00:01.363) 0:07:35.744 ********** 2026-03-08 00:33:42.723435 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:42.723448 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:42.723461 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:42.723472 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:42.723482 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:42.723493 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:42.723504 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:42.723514 | orchestrator | 2026-03-08 00:33:42.723525 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-08 00:33:42.723536 | orchestrator | Sunday 08 March 2026 00:33:19 +0000 (0:00:01.474) 0:07:37.219 ********** 2026-03-08 00:33:42.723546 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:33:42.723557 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:33:42.723568 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:33:42.723578 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:33:42.723589 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:33:42.723599 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:33:42.723610 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:33:42.723637 | orchestrator | 2026-03-08 00:33:42.723649 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-08 00:33:42.723660 | orchestrator | Sunday 08 March 2026 00:33:20 +0000 (0:00:00.484) 0:07:37.704 ********** 2026-03-08 00:33:42.723681 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:42.723695 | orchestrator | 2026-03-08 00:33:42.723706 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-08 00:33:42.723716 | orchestrator | Sunday 08 March 2026 00:33:21 +0000 (0:00:00.955) 0:07:38.659 ********** 2026-03-08 00:33:42.723729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:42.723751 | orchestrator | 2026-03-08 00:33:42.723762 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-08 00:33:42.723773 | orchestrator | Sunday 08 March 2026 00:33:22 +0000 (0:00:00.751) 0:07:39.410 ********** 2026-03-08 00:33:42.723784 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.723795 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.723805 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.723816 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.723846 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.723857 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.723867 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.723878 | orchestrator | 2026-03-08 00:33:42.723889 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-08 00:33:42.723918 | orchestrator | Sunday 08 March 2026 00:33:30 +0000 (0:00:08.554) 0:07:47.965 ********** 2026-03-08 00:33:42.723930 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.723940 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.723951 | orchestrator | changed: [testbed-node-4] 2026-03-08 
00:33:42.723961 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.723972 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.723982 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.723993 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.724004 | orchestrator | 2026-03-08 00:33:42.724014 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-08 00:33:42.724025 | orchestrator | Sunday 08 March 2026 00:33:31 +0000 (0:00:01.027) 0:07:48.992 ********** 2026-03-08 00:33:42.724036 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.724046 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.724057 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.724068 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.724078 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.724089 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.724100 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.724110 | orchestrator | 2026-03-08 00:33:42.724121 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-08 00:33:42.724132 | orchestrator | Sunday 08 March 2026 00:33:32 +0000 (0:00:01.366) 0:07:50.359 ********** 2026-03-08 00:33:42.724142 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.724153 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.724164 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.724174 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.724185 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.724195 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.724206 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.724217 | orchestrator | 2026-03-08 00:33:42.724227 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-08 00:33:42.724238 | orchestrator | Sunday 08 March 2026 00:33:35 +0000 (0:00:02.589) 0:07:52.949 ********** 2026-03-08 00:33:42.724249 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.724259 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.724270 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.724281 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.724291 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.724302 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.724313 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.724323 | orchestrator | 2026-03-08 00:33:42.724334 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-08 00:33:42.724345 | orchestrator | Sunday 08 March 2026 00:33:36 +0000 (0:00:01.214) 0:07:54.163 ********** 2026-03-08 00:33:42.724356 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.724366 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.724377 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.724387 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.724405 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.724416 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.724427 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.724437 | orchestrator | 2026-03-08 00:33:42.724448 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-08 00:33:42.724459 | orchestrator | 2026-03-08 00:33:42.724475 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-08 00:33:42.724486 | orchestrator | Sunday 08 March 2026 00:33:37 +0000 (0:00:01.097) 0:07:55.261 ********** 2026-03-08 00:33:42.724497 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-08 00:33:42.724508 | orchestrator | 2026-03-08 00:33:42.724519 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-08 00:33:42.724529 | orchestrator | Sunday 08 March 2026 00:33:38 +0000 (0:00:00.762) 0:07:56.023 ********** 2026-03-08 00:33:42.724540 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:42.724551 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:42.724562 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:42.724572 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:42.724583 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:42.724594 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:42.724604 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:42.724615 | orchestrator | 2026-03-08 00:33:42.724626 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-08 00:33:42.724636 | orchestrator | Sunday 08 March 2026 00:33:39 +0000 (0:00:01.024) 0:07:57.047 ********** 2026-03-08 00:33:42.724647 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:42.724658 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:42.724669 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:42.724679 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:42.724690 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:42.724701 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:42.724711 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:42.724722 | orchestrator | 2026-03-08 00:33:42.724733 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-08 00:33:42.724743 | orchestrator | Sunday 08 March 2026 00:33:40 +0000 (0:00:01.244) 0:07:58.292 ********** 2026-03-08 00:33:42.724754 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-08 00:33:42.724765 | orchestrator | 2026-03-08 00:33:42.724776 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-08 00:33:42.724787 | orchestrator | Sunday 08 March 2026 00:33:41 +0000 (0:00:00.959) 0:07:59.251 ********** 2026-03-08 00:33:42.724797 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:42.724808 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:42.724819 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:42.724849 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:42.724860 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:42.724870 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:42.724881 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:42.724892 | orchestrator | 2026-03-08 00:33:42.724902 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-08 00:33:42.724919 | orchestrator | Sunday 08 March 2026 00:33:42 +0000 (0:00:00.846) 0:08:00.098 ********** 2026-03-08 00:33:44.242936 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:44.243078 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:44.243105 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:44.243125 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:44.243145 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:44.243164 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:44.243185 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:44.243205 | orchestrator | 2026-03-08 00:33:44.243226 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:33:44.243284 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-08 00:33:44.243298 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-08 00:33:44.243309 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-08 00:33:44.243320 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-08 00:33:44.243331 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-08 00:33:44.243342 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-08 00:33:44.243352 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-08 00:33:44.243363 | orchestrator | 2026-03-08 00:33:44.243374 | orchestrator | 2026-03-08 00:33:44.243385 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:33:44.243396 | orchestrator | Sunday 08 March 2026 00:33:43 +0000 (0:00:01.095) 0:08:01.193 ********** 2026-03-08 00:33:44.243409 | orchestrator | =============================================================================== 2026-03-08 00:33:44.243422 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.57s 2026-03-08 00:33:44.243435 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.19s 2026-03-08 00:33:44.243447 | orchestrator | osism.commons.packages : Download required packages -------------------- 30.48s 2026-03-08 00:33:44.243460 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.19s 2026-03-08 00:33:44.243489 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.17s 2026-03-08 00:33:44.243502 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.51s 2026-03-08 00:33:44.243516 | orchestrator | osism.services.docker : Install docker package ------------------------- 
10.80s 2026-03-08 00:33:44.243529 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.63s 2026-03-08 00:33:44.243541 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.58s 2026-03-08 00:33:44.243554 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.55s 2026-03-08 00:33:44.243567 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.36s 2026-03-08 00:33:44.243579 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.09s 2026-03-08 00:33:44.243591 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.94s 2026-03-08 00:33:44.243603 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.91s 2026-03-08 00:33:44.243616 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.83s 2026-03-08 00:33:44.243628 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.31s 2026-03-08 00:33:44.243640 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.69s 2026-03-08 00:33:44.243653 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.56s 2026-03-08 00:33:44.243666 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.58s 2026-03-08 00:33:44.243676 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.56s 2026-03-08 00:33:44.521325 | orchestrator | + osism apply fail2ban 2026-03-08 00:33:56.982855 | orchestrator | 2026-03-08 00:33:56 | INFO  | Task 36555a44-7f57-42dc-82a8-df2e245ec4cd (fail2ban) was prepared for execution. 
2026-03-08 00:33:56.982999 | orchestrator | 2026-03-08 00:33:56 | INFO  | It takes a moment until task 36555a44-7f57-42dc-82a8-df2e245ec4cd (fail2ban) has been started and output is visible here. 2026-03-08 00:34:17.799203 | orchestrator | 2026-03-08 00:34:17.799324 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-08 00:34:17.799343 | orchestrator | 2026-03-08 00:34:17.799357 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-08 00:34:17.799370 | orchestrator | Sunday 08 March 2026 00:34:01 +0000 (0:00:00.249) 0:00:00.249 ********** 2026-03-08 00:34:17.799383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:34:17.799398 | orchestrator | 2026-03-08 00:34:17.799410 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-08 00:34:17.799443 | orchestrator | Sunday 08 March 2026 00:34:02 +0000 (0:00:01.134) 0:00:01.384 ********** 2026-03-08 00:34:17.799466 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:34:17.799478 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:34:17.799489 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:34:17.799500 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:34:17.799511 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:34:17.799522 | orchestrator | changed: [testbed-manager] 2026-03-08 00:34:17.799533 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:34:17.799543 | orchestrator | 2026-03-08 00:34:17.799555 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-08 00:34:17.799567 | orchestrator | Sunday 08 March 2026 00:34:13 +0000 (0:00:10.711) 0:00:12.095 ********** 
2026-03-08 00:34:17.799577 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:34:17.799588 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:34:17.799599 | orchestrator | changed: [testbed-manager]
2026-03-08 00:34:17.799610 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:34:17.799620 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:34:17.799631 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:34:17.799642 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:34:17.799652 | orchestrator |
2026-03-08 00:34:17.799663 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-08 00:34:17.799674 | orchestrator | Sunday 08 March 2026 00:34:14 +0000 (0:00:01.422) 0:00:13.518 **********
2026-03-08 00:34:17.799685 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:17.799697 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:17.799708 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:17.799718 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:17.799729 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:17.799741 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:17.799753 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:17.799765 | orchestrator |
2026-03-08 00:34:17.799816 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-08 00:34:17.799837 | orchestrator | Sunday 08 March 2026 00:34:15 +0000 (0:00:01.453) 0:00:14.972 **********
2026-03-08 00:34:17.799858 | orchestrator | changed: [testbed-manager]
2026-03-08 00:34:17.799878 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:34:17.799891 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:34:17.799902 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:34:17.799912 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:34:17.799924 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:34:17.799934 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:34:17.799945 | orchestrator |
2026-03-08 00:34:17.799956 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:34:17.799967 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800005 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800017 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800028 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800039 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800050 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800061 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:34:17.800072 | orchestrator |
2026-03-08 00:34:17.800083 | orchestrator |
2026-03-08 00:34:17.800093 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:34:17.800104 | orchestrator | Sunday 08 March 2026 00:34:17 +0000 (0:00:01.548) 0:00:16.520 **********
2026-03-08 00:34:17.800115 | orchestrator | ===============================================================================
2026-03-08 00:34:17.800126 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.71s
2026-03-08 00:34:17.800137 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.55s
2026-03-08 00:34:17.800147 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.45s
2026-03-08 00:34:17.800158 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.42s
2026-03-08 00:34:17.800169 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.13s
2026-03-08 00:34:18.086099 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-08 00:34:18.086196 | orchestrator | + osism apply network
2026-03-08 00:34:30.059687 | orchestrator | 2026-03-08 00:34:30 | INFO  | Task e480d2be-b741-403b-bb59-4dbd05821a02 (network) was prepared for execution.
2026-03-08 00:34:30.059813 | orchestrator | 2026-03-08 00:34:30 | INFO  | It takes a moment until task e480d2be-b741-403b-bb59-4dbd05821a02 (network) has been started and output is visible here.
2026-03-08 00:34:56.018938 | orchestrator |
2026-03-08 00:34:56.019040 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-08 00:34:56.019052 | orchestrator |
2026-03-08 00:34:56.019060 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-08 00:34:56.019069 | orchestrator | Sunday 08 March 2026 00:34:33 +0000 (0:00:00.239) 0:00:00.239 **********
2026-03-08 00:34:56.019077 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.019085 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.019093 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.019100 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.019108 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.019115 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.019122 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.019130 | orchestrator |
2026-03-08 00:34:56.019137 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-08 00:34:56.019144 | orchestrator | Sunday 08 March 2026 00:34:34 +0000 (0:00:00.660) 0:00:00.900 **********
2026-03-08 00:34:56.019154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:34:56.019163 | orchestrator |
2026-03-08 00:34:56.019171 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-08 00:34:56.019178 | orchestrator | Sunday 08 March 2026 00:34:35 +0000 (0:00:01.118) 0:00:02.019 **********
2026-03-08 00:34:56.019206 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.019214 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.019221 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.019228 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.019235 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.019242 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.019250 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.019257 | orchestrator |
2026-03-08 00:34:56.019264 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-08 00:34:56.019271 | orchestrator | Sunday 08 March 2026 00:34:37 +0000 (0:00:01.613) 0:00:03.632 **********
2026-03-08 00:34:56.019278 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.019285 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.019293 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.019300 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.019307 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.019314 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.019322 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.019329 | orchestrator |
2026-03-08 00:34:56.019336 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-08 00:34:56.019343 | orchestrator | Sunday 08 March 2026 00:34:38 +0000 (0:00:01.449) 0:00:05.082 **********
2026-03-08 00:34:56.019350 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-08 00:34:56.019358 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-08 00:34:56.019365 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-08 00:34:56.019373 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-08 00:34:56.019380 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-08 00:34:56.019401 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-08 00:34:56.019409 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-08 00:34:56.019416 | orchestrator |
2026-03-08 00:34:56.019423 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-08 00:34:56.019434 | orchestrator | Sunday 08 March 2026 00:34:39 +0000 (0:00:00.846) 0:00:05.929 **********
2026-03-08 00:34:56.019442 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-08 00:34:56.019450 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:34:56.019457 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 00:34:56.019464 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-08 00:34:56.019471 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:34:56.019479 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 00:34:56.019488 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 00:34:56.019496 | orchestrator |
2026-03-08 00:34:56.019505 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-08 00:34:56.019513 | orchestrator | Sunday 08 March 2026 00:34:42 +0000 (0:00:02.906) 0:00:08.835 **********
2026-03-08 00:34:56.019522 | orchestrator | changed: [testbed-manager]
2026-03-08 00:34:56.019530 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:34:56.019539 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:34:56.019547 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:34:56.019556 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:34:56.019564 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:34:56.019572 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:34:56.019581 | orchestrator |
2026-03-08 00:34:56.019589 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-08 00:34:56.019598 | orchestrator | Sunday 08 March 2026 00:34:44 +0000 (0:00:01.534) 0:00:10.369 **********
2026-03-08 00:34:56.019607 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:34:56.019615 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-08 00:34:56.019624 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:34:56.019632 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 00:34:56.019641 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-08 00:34:56.019650 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 00:34:56.019664 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 00:34:56.019673 | orchestrator |
2026-03-08 00:34:56.019682 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-08 00:34:56.019690 | orchestrator | Sunday 08 March 2026 00:34:45 +0000 (0:00:01.911) 0:00:12.281 **********
2026-03-08 00:34:56.019699 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.019732 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.019757 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.019771 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.019784 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.019798 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.019808 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.019816 | orchestrator |
2026-03-08 00:34:56.019825 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-08 00:34:56.019849 | orchestrator | Sunday 08 March 2026 00:34:47 +0000 (0:00:01.086) 0:00:13.367 **********
2026-03-08 00:34:56.019858 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:56.019866 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:56.019873 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:56.019880 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:56.019888 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:56.019895 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:56.019902 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:56.019909 | orchestrator |
2026-03-08 00:34:56.019916 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-08 00:34:56.019923 | orchestrator | Sunday 08 March 2026 00:34:47 +0000 (0:00:00.633) 0:00:14.001 **********
2026-03-08 00:34:56.019931 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.019938 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.019945 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.019952 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.019959 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.019966 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.019973 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.019980 | orchestrator |
2026-03-08 00:34:56.019987 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-08 00:34:56.019995 | orchestrator | Sunday 08 March 2026 00:34:49 +0000 (0:00:02.115) 0:00:16.116 **********
2026-03-08 00:34:56.020002 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:56.020009 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:56.020016 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:56.020023 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:56.020030 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:56.020037 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:56.020045 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-08 00:34:56.020054 | orchestrator |
2026-03-08 00:34:56.020061 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-08 00:34:56.020068 | orchestrator | Sunday 08 March 2026 00:34:50 +0000 (0:00:00.864) 0:00:16.981 **********
2026-03-08 00:34:56.020076 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.020083 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:34:56.020090 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:34:56.020097 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:34:56.020104 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:34:56.020111 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:34:56.020118 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:34:56.020126 | orchestrator |
2026-03-08 00:34:56.020133 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-08 00:34:56.020140 | orchestrator | Sunday 08 March 2026 00:34:52 +0000 (0:00:01.602) 0:00:18.583 **********
2026-03-08 00:34:56.020147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:34:56.020163 | orchestrator |
2026-03-08 00:34:56.020170 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-08 00:34:56.020177 | orchestrator | Sunday 08 March 2026 00:34:53 +0000 (0:00:01.173) 0:00:19.756 **********
2026-03-08 00:34:56.020184 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.020191 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.020200 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.020208 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.020222 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.020231 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.020239 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.020248 | orchestrator |
2026-03-08 00:34:56.020257 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-08 00:34:56.020265 | orchestrator | Sunday 08 March 2026 00:34:54 +0000 (0:00:00.897) 0:00:20.654 **********
2026-03-08 00:34:56.020274 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:56.020282 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:56.020291 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:56.020299 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:56.020308 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:56.020316 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:56.020325 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:56.020333 | orchestrator |
2026-03-08 00:34:56.020342 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-08 00:34:56.020350 | orchestrator | Sunday 08 March 2026 00:34:54 +0000 (0:00:00.657) 0:00:21.311 **********
2026-03-08 00:34:56.020359 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020368 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020376 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020385 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020393 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020402 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020410 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020419 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020428 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020436 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020445 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020453 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:56.020462 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020470 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:56.020479 | orchestrator |
2026-03-08 00:34:56.020493 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-08 00:35:11.329753 | orchestrator | Sunday 08 March 2026 00:34:56 +0000 (0:00:01.048) 0:00:22.360 **********
2026-03-08 00:35:11.329865 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:35:11.329884 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:35:11.329896 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:35:11.329907 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:35:11.329918 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:35:11.329929 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:35:11.329940 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:35:11.329951 | orchestrator |
2026-03-08 00:35:11.329963 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-08 00:35:11.329974 | orchestrator | Sunday 08 March 2026 00:34:56 +0000 (0:00:00.575) 0:00:22.935 **********
2026-03-08 00:35:11.330010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2026-03-08 00:35:11.330065 | orchestrator |
2026-03-08 00:35:11.330077 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-08 00:35:11.330088 | orchestrator | Sunday 08 March 2026 00:35:00 +0000 (0:00:03.984) 0:00:26.919 **********
2026-03-08 00:35:11.330101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330113 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330170 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330313 | orchestrator |
2026-03-08 00:35:11.330326 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-08 00:35:11.330340 | orchestrator | Sunday 08 March 2026 00:35:05 +0000 (0:00:05.425) 0:00:32.345 **********
2026-03-08 00:35:11.330355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330375 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330412 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:35:11.330502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:11.330539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:16.914497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:35:16.914579 | orchestrator |
2026-03-08 00:35:16.914586 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-08 00:35:16.914593 | orchestrator | Sunday 08 March 2026 00:35:11 +0000 (0:00:05.323) 0:00:37.668 **********
2026-03-08 00:35:16.914600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:35:16.914605 | orchestrator |
2026-03-08 00:35:16.914610 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-08 00:35:16.914615 | orchestrator | Sunday 08 March 2026 00:35:12 +0000 (0:00:01.119) 0:00:38.787 **********
2026-03-08 00:35:16.914620 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:16.914626 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:35:16.914630 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:35:16.914635 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:35:16.914639 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:35:16.914644 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:35:16.914648 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:35:16.914653 | orchestrator |
2026-03-08 00:35:16.914658 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-08 00:35:16.914662 | orchestrator | Sunday 08 March 2026 00:35:13 +0000 (0:00:00.986) 0:00:39.774 **********
2026-03-08 00:35:16.914667 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914672 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914710 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914715 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914720 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914724 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914729 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914734 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914739 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:35:16.914745 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914762 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914768 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914773 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914778 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:35:16.914783 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914809 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914818 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914827 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914835 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:35:16.914844 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914852 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914858 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914863 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914868 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:35:16.914886 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914895 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914911 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914920 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914928 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:35:16.914937 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:35:16.914944 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:35:16.914949 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:35:16.914954 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:35:16.914959 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:35:16.914964 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:35:16.914969 | orchestrator |
2026-03-08 00:35:16.914975 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-08 00:35:16.914992 | orchestrator | Sunday 08 March 2026 00:35:15 +0000 (0:00:01.857) 0:00:41.632 **********
2026-03-08 00:35:16.914997 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:35:16.915002 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:35:16.915007 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:35:16.915012 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:35:16.915018 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:35:16.915023 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:35:16.915028 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:35:16.915033 | orchestrator |
2026-03-08 00:35:16.915038 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-08 00:35:16.915044 | orchestrator | Sunday 08 March 2026 00:35:15 +0000 (0:00:00.636) 0:00:42.268 **********
2026-03-08 00:35:16.915050 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:35:16.915056 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:35:16.915061 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:35:16.915067 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:35:16.915074 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:35:16.915080 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:35:16.915086 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:35:16.915092 | orchestrator |
2026-03-08 00:35:16.915098 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:35:16.915105 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-08 00:35:16.915112 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 00:35:16.915118 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 00:35:16.915129 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 00:35:16.915135 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 00:35:16.915141 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 00:35:16.915147 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 00:35:16.915153 | orchestrator |
2026-03-08 00:35:16.915159 | orchestrator |
2026-03-08 00:35:16.915165 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:35:16.915171 | orchestrator | Sunday 08 March 2026 00:35:16 +0000 (0:00:00.662) 0:00:42.930 **********
2026-03-08 00:35:16.915180 | orchestrator | ===============================================================================
2026-03-08 00:35:16.915186 | orchestrator | osism.commons.network : Create systemd networkd netdev
files ------------ 5.43s 2026-03-08 00:35:16.915192 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.32s 2026-03-08 00:35:16.915198 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.98s 2026-03-08 00:35:16.915204 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.91s 2026-03-08 00:35:16.915210 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2026-03-08 00:35:16.915216 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.91s 2026-03-08 00:35:16.915222 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.86s 2026-03-08 00:35:16.915228 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.61s 2026-03-08 00:35:16.915234 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.60s 2026-03-08 00:35:16.915239 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.53s 2026-03-08 00:35:16.915245 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.45s 2026-03-08 00:35:16.915251 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.17s 2026-03-08 00:35:16.915257 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.12s 2026-03-08 00:35:16.915266 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.12s 2026-03-08 00:35:16.915275 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s 2026-03-08 00:35:16.915283 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.05s 2026-03-08 00:35:16.915292 | orchestrator | osism.commons.network : List existing configuration files 
--------------- 0.99s 2026-03-08 00:35:16.915301 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.90s 2026-03-08 00:35:16.915311 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.86s 2026-03-08 00:35:16.915319 | orchestrator | osism.commons.network : Create required directories --------------------- 0.85s 2026-03-08 00:35:17.190953 | orchestrator | + osism apply wireguard 2026-03-08 00:35:29.203375 | orchestrator | 2026-03-08 00:35:29 | INFO  | Task 20e76151-1e62-4c0f-af9c-e2a25fce3506 (wireguard) was prepared for execution. 2026-03-08 00:35:29.203494 | orchestrator | 2026-03-08 00:35:29 | INFO  | It takes a moment until task 20e76151-1e62-4c0f-af9c-e2a25fce3506 (wireguard) has been started and output is visible here. 2026-03-08 00:35:46.767225 | orchestrator | 2026-03-08 00:35:46.767340 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-08 00:35:46.767358 | orchestrator | 2026-03-08 00:35:46.767395 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-08 00:35:46.767407 | orchestrator | Sunday 08 March 2026 00:35:33 +0000 (0:00:00.196) 0:00:00.196 ********** 2026-03-08 00:35:46.767418 | orchestrator | ok: [testbed-manager] 2026-03-08 00:35:46.767430 | orchestrator | 2026-03-08 00:35:46.767442 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-08 00:35:46.767452 | orchestrator | Sunday 08 March 2026 00:35:34 +0000 (0:00:01.181) 0:00:01.378 ********** 2026-03-08 00:35:46.767464 | orchestrator | changed: [testbed-manager] 2026-03-08 00:35:46.767475 | orchestrator | 2026-03-08 00:35:46.767491 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-08 00:35:46.767503 | orchestrator | Sunday 08 March 2026 00:35:39 +0000 (0:00:05.302) 0:00:06.681 ********** 2026-03-08 
00:35:46.767514 | orchestrator | changed: [testbed-manager] 2026-03-08 00:35:46.767525 | orchestrator | 2026-03-08 00:35:46.767535 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-08 00:35:46.767546 | orchestrator | Sunday 08 March 2026 00:35:40 +0000 (0:00:00.512) 0:00:07.193 ********** 2026-03-08 00:35:46.767557 | orchestrator | changed: [testbed-manager] 2026-03-08 00:35:46.767568 | orchestrator | 2026-03-08 00:35:46.767579 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-08 00:35:46.767589 | orchestrator | Sunday 08 March 2026 00:35:40 +0000 (0:00:00.388) 0:00:07.582 ********** 2026-03-08 00:35:46.767600 | orchestrator | ok: [testbed-manager] 2026-03-08 00:35:46.767611 | orchestrator | 2026-03-08 00:35:46.767622 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-08 00:35:46.767656 | orchestrator | Sunday 08 March 2026 00:35:41 +0000 (0:00:00.572) 0:00:08.154 ********** 2026-03-08 00:35:46.767667 | orchestrator | ok: [testbed-manager] 2026-03-08 00:35:46.767678 | orchestrator | 2026-03-08 00:35:46.767689 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-08 00:35:46.767700 | orchestrator | Sunday 08 March 2026 00:35:41 +0000 (0:00:00.368) 0:00:08.523 ********** 2026-03-08 00:35:46.767710 | orchestrator | ok: [testbed-manager] 2026-03-08 00:35:46.767721 | orchestrator | 2026-03-08 00:35:46.767732 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-08 00:35:46.767743 | orchestrator | Sunday 08 March 2026 00:35:41 +0000 (0:00:00.399) 0:00:08.923 ********** 2026-03-08 00:35:46.767754 | orchestrator | changed: [testbed-manager] 2026-03-08 00:35:46.767766 | orchestrator | 2026-03-08 00:35:46.767779 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-03-08 00:35:46.767792 | orchestrator | Sunday 08 March 2026 00:35:42 +0000 (0:00:01.103) 0:00:10.026 **********
2026-03-08 00:35:46.767805 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-08 00:35:46.767818 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:46.767830 | orchestrator |
2026-03-08 00:35:46.767842 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-08 00:35:46.767855 | orchestrator | Sunday 08 March 2026 00:35:43 +0000 (0:00:00.931) 0:00:10.957 **********
2026-03-08 00:35:46.767868 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:46.767881 | orchestrator |
2026-03-08 00:35:46.767893 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-08 00:35:46.767904 | orchestrator | Sunday 08 March 2026 00:35:45 +0000 (0:00:01.715) 0:00:12.672 **********
2026-03-08 00:35:46.767914 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:46.767926 | orchestrator |
2026-03-08 00:35:46.767937 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:35:46.767948 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:35:46.767960 | orchestrator |
2026-03-08 00:35:46.767971 | orchestrator |
2026-03-08 00:35:46.767982 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:35:46.767993 | orchestrator | Sunday 08 March 2026 00:35:46 +0000 (0:00:00.917) 0:00:13.590 **********
2026-03-08 00:35:46.768012 | orchestrator | ===============================================================================
2026-03-08 00:35:46.768023 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.30s
2026-03-08 00:35:46.768034 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.72s
2026-03-08 00:35:46.768044 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.18s
2026-03-08 00:35:46.768055 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.10s
2026-03-08 00:35:46.768066 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2026-03-08 00:35:46.768077 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2026-03-08 00:35:46.768088 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.57s
2026-03-08 00:35:46.768099 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.51s
2026-03-08 00:35:46.768109 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-03-08 00:35:46.768120 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.39s
2026-03-08 00:35:46.768131 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s
2026-03-08 00:35:47.046789 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-08 00:35:47.080590 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-08 00:35:47.080702 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-08 00:35:47.152722 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 194 0 --:--:-- --:--:-- --:--:-- 197
2026-03-08 00:35:47.169544 | orchestrator | + osism apply --environment custom workarounds
2026-03-08 00:35:49.084615 | orchestrator | 2026-03-08 00:35:49 | INFO  | Trying to run play workarounds in environment custom
2026-03-08 00:35:59.252357 | orchestrator | 2026-03-08 00:35:59 | INFO  | Task 24dd8d71-c81c-4425-8da6-2c9500358950 (workarounds) was prepared for execution.
2026-03-08 00:35:59.252506 | orchestrator | 2026-03-08 00:35:59 | INFO  | It takes a moment until task 24dd8d71-c81c-4425-8da6-2c9500358950 (workarounds) has been started and output is visible here.
2026-03-08 00:36:23.508910 | orchestrator |
2026-03-08 00:36:23.509026 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:36:23.509047 | orchestrator |
2026-03-08 00:36:23.509062 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-08 00:36:23.509077 | orchestrator | Sunday 08 March 2026 00:36:03 +0000 (0:00:00.122) 0:00:00.122 **********
2026-03-08 00:36:23.509092 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509108 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509123 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509140 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509153 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509167 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509182 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-08 00:36:23.509194 | orchestrator |
2026-03-08 00:36:23.509208 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-08 00:36:23.509221 | orchestrator |
2026-03-08 00:36:23.509236 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-08 00:36:23.509248 | orchestrator | Sunday 08 March 2026 00:36:03 +0000 (0:00:00.761) 0:00:00.884 **********
2026-03-08 00:36:23.509262 | orchestrator | ok: [testbed-manager]
2026-03-08 00:36:23.509277 | orchestrator |
2026-03-08 00:36:23.509291 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-08 00:36:23.509332 | orchestrator |
2026-03-08 00:36:23.509346 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-08 00:36:23.509360 | orchestrator | Sunday 08 March 2026 00:36:06 +0000 (0:00:02.249) 0:00:03.133 **********
2026-03-08 00:36:23.509374 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:36:23.509388 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:36:23.509401 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:36:23.509415 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:36:23.509428 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:36:23.509442 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:36:23.509456 | orchestrator |
2026-03-08 00:36:23.509471 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-08 00:36:23.509483 | orchestrator |
2026-03-08 00:36:23.509498 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-08 00:36:23.509542 | orchestrator | Sunday 08 March 2026 00:36:08 +0000 (0:00:01.813) 0:00:04.947 **********
2026-03-08 00:36:23.509558 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:36:23.509616 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:36:23.509630 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:36:23.509644 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:36:23.509657 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:36:23.509670 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:36:23.509683 | orchestrator |
2026-03-08 00:36:23.509697 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-08 00:36:23.509711 | orchestrator | Sunday 08 March 2026 00:36:09 +0000 (0:00:01.499) 0:00:06.447 **********
2026-03-08 00:36:23.509723 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:36:23.509736 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:36:23.509749 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:36:23.509762 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:36:23.509775 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:36:23.509805 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:36:23.509818 | orchestrator |
2026-03-08 00:36:23.509832 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-08 00:36:23.509844 | orchestrator | Sunday 08 March 2026 00:36:13 +0000 (0:00:03.643) 0:00:10.090 **********
2026-03-08 00:36:23.509857 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:36:23.509870 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:36:23.509883 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:36:23.509896 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:36:23.509909 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:36:23.509922 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:36:23.509936 | orchestrator |
2026-03-08 00:36:23.509948 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-08 00:36:23.509961 | orchestrator |
2026-03-08 00:36:23.509974 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-08 00:36:23.509987 | orchestrator | Sunday 08 March 2026 00:36:13 +0000 (0:00:00.664) 0:00:10.755 **********
2026-03-08 00:36:23.510000 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:36:23.510076 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:36:23.510094 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:36:23.510108 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:36:23.510123 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:36:23.510136 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:36:23.510150 | orchestrator | changed: [testbed-manager]
2026-03-08 00:36:23.510163 | orchestrator |
2026-03-08 00:36:23.510177 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-08 00:36:23.510204 | orchestrator | Sunday 08 March 2026 00:36:15 +0000 (0:00:01.640) 0:00:12.396 **********
2026-03-08 00:36:23.510217 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:36:23.510231 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:36:23.510245 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:36:23.510259 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:36:23.510273 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:36:23.510287 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:36:23.510323 | orchestrator | changed: [testbed-manager]
2026-03-08 00:36:23.510337 | orchestrator |
2026-03-08 00:36:23.510351 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-08 00:36:23.510366 | orchestrator | Sunday 08 March 2026 00:36:16 +0000 (0:00:01.485) 0:00:13.881 **********
2026-03-08 00:36:23.510378 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:36:23.510391 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:36:23.510418 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:36:23.510431 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:36:23.510444 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:36:23.510457 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:36:23.510471 | orchestrator | ok: [testbed-manager]
2026-03-08 00:36:23.510483 | orchestrator |
2026-03-08 00:36:23.510495 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-08 00:36:23.510510 | orchestrator | Sunday 08 March 2026 00:36:18 +0000 (0:00:01.535) 0:00:15.417 **********
2026-03-08 00:36:23.510522 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:36:23.510535 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:36:23.510547 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:36:23.510560 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:36:23.510590 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:36:23.510605 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:36:23.510617 | orchestrator | changed: [testbed-manager]
2026-03-08 00:36:23.510629 | orchestrator |
2026-03-08 00:36:23.510642 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-08 00:36:23.510655 | orchestrator | Sunday 08 March 2026 00:36:20 +0000 (0:00:01.773) 0:00:17.191 **********
2026-03-08 00:36:23.510668 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:36:23.510681 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:36:23.510694 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:36:23.510706 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:36:23.510719 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:36:23.510733 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:36:23.510745 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:36:23.510757 | orchestrator |
2026-03-08 00:36:23.510772 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-08 00:36:23.510783 | orchestrator |
2026-03-08 00:36:23.510796 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-08 00:36:23.510809 | orchestrator | Sunday 08 March 2026 00:36:20 +0000 (0:00:00.618) 0:00:17.809 **********
2026-03-08 00:36:23.510821 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:36:23.510834 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:36:23.510848 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:36:23.510861 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:36:23.510873 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:36:23.510894 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:36:23.510907 | orchestrator | ok: [testbed-manager]
2026-03-08 00:36:23.510920 | orchestrator |
2026-03-08 00:36:23.510934 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:36:23.510948 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-08 00:36:23.510962 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:23.510984 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:23.510998 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:23.511012 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:23.511025 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:23.511037 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:23.511050 | orchestrator |
2026-03-08 00:36:23.511077 | orchestrator |
2026-03-08 00:36:23.511091 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:36:23.511105 | orchestrator | Sunday 08 March 2026 00:36:23 +0000 (0:00:02.580) 0:00:20.390 **********
2026-03-08 00:36:23.511117 | orchestrator | ===============================================================================
2026-03-08 00:36:23.511130 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.64s
2026-03-08 00:36:23.511142 | orchestrator | Install python3-docker -------------------------------------------------- 2.58s
2026-03-08 00:36:23.511155 | orchestrator | Apply netplan configuration --------------------------------------------- 2.25s
2026-03-08 00:36:23.511168 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s
2026-03-08 00:36:23.511182 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s
2026-03-08 00:36:23.511195 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s
2026-03-08 00:36:23.511207 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s
2026-03-08 00:36:23.511221 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s
2026-03-08 00:36:23.511233 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-03-08 00:36:23.511248 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s
2026-03-08 00:36:23.511260 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s
2026-03-08 00:36:23.511282 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2026-03-08 00:36:24.128426 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-08 00:36:36.095166 | orchestrator | 2026-03-08 00:36:36 | INFO  | Task bb3195cc-b2c4-41f1-ac77-924ed68b6cf6 (reboot) was prepared for execution.
2026-03-08 00:36:36.095302 | orchestrator | 2026-03-08 00:36:36 | INFO  | It takes a moment until task bb3195cc-b2c4-41f1-ac77-924ed68b6cf6 (reboot) has been started and output is visible here.
2026-03-08 00:36:46.045818 | orchestrator |
2026-03-08 00:36:46.045937 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-08 00:36:46.045956 | orchestrator |
2026-03-08 00:36:46.045968 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-08 00:36:46.045980 | orchestrator | Sunday 08 March 2026 00:36:40 +0000 (0:00:00.201) 0:00:00.201 **********
2026-03-08 00:36:46.045991 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:36:46.046003 | orchestrator |
2026-03-08 00:36:46.046079 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-08 00:36:46.046092 | orchestrator | Sunday 08 March 2026 00:36:40 +0000 (0:00:00.103) 0:00:00.304 **********
2026-03-08 00:36:46.046166 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:36:46.046180 | orchestrator |
2026-03-08 00:36:46.046191 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-08 00:36:46.046203 | orchestrator | Sunday 08 March 2026 00:36:41 +0000 (0:00:00.902) 0:00:01.207 **********
2026-03-08 00:36:46.046238 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:36:46.046250 | orchestrator |
2026-03-08 00:36:46.046261 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-08 00:36:46.046272 | orchestrator |
2026-03-08 00:36:46.046283 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-08 00:36:46.046294 | orchestrator | Sunday 08 March 2026 00:36:41 +0000 (0:00:00.125) 0:00:01.332 **********
2026-03-08 00:36:46.046305 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:36:46.046316 | orchestrator |
2026-03-08 00:36:46.046327 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-08 00:36:46.046338 | orchestrator | Sunday 08 March 2026 00:36:41 +0000 (0:00:00.100) 0:00:01.433 **********
2026-03-08 00:36:46.046351 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:36:46.046364 | orchestrator |
2026-03-08 00:36:46.046391 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-08 00:36:46.046404 | orchestrator | Sunday 08 March 2026 00:36:41 +0000 (0:00:00.634) 0:00:02.068 **********
2026-03-08 00:36:46.046417 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:36:46.046430 | orchestrator |
2026-03-08 00:36:46.046444 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-08 00:36:46.046455 | orchestrator |
2026-03-08 00:36:46.046466 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-08 00:36:46.046477 | orchestrator | Sunday 08 March 2026 00:36:42 +0000 (0:00:00.109) 0:00:02.178 **********
2026-03-08 00:36:46.046488 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:36:46.046499 | orchestrator |
2026-03-08 00:36:46.046510 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-08 00:36:46.046592 | orchestrator | Sunday 08 March 2026 00:36:42 +0000 (0:00:00.196) 0:00:02.375 **********
2026-03-08 00:36:46.046611 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:36:46.046631 | orchestrator |
2026-03-08 00:36:46.046651 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-08 00:36:46.046669 | orchestrator | Sunday 08 March 2026 00:36:43 +0000 (0:00:00.704) 0:00:03.079 **********
2026-03-08 00:36:46.046684 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:36:46.046695 | orchestrator |
2026-03-08 00:36:46.046706 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-08 00:36:46.046716 | orchestrator |
2026-03-08 00:36:46.046727 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-08 00:36:46.046738 | orchestrator | Sunday 08 March 2026 00:36:43 +0000 (0:00:00.107) 0:00:03.187 **********
2026-03-08 00:36:46.046755 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:36:46.046773 | orchestrator |
2026-03-08 00:36:46.046792 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-08 00:36:46.046811 | orchestrator | Sunday 08 March 2026 00:36:43 +0000 (0:00:00.098) 0:00:03.286 **********
2026-03-08 00:36:46.046830 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:36:46.046848 | orchestrator |
2026-03-08 00:36:46.046862 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-08 00:36:46.046873 | orchestrator | Sunday 08 March 2026 00:36:43 +0000 (0:00:00.666) 0:00:03.953 **********
2026-03-08 00:36:46.046884 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:36:46.046895 | orchestrator |
2026-03-08 00:36:46.046909 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-08 00:36:46.046926 | orchestrator |
2026-03-08 00:36:46.046944 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-08 00:36:46.046961 | orchestrator | Sunday 08 March 2026 00:36:43 +0000 (0:00:00.102) 0:00:04.055 **********
2026-03-08 00:36:46.046979 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:36:46.047049 | orchestrator |
2026-03-08 00:36:46.047069 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-08 00:36:46.047089 | orchestrator | Sunday 08 March 2026 00:36:44 +0000 (0:00:00.084) 0:00:04.140 **********
2026-03-08 00:36:46.047125 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:36:46.047143 | orchestrator |
2026-03-08 00:36:46.047173 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-08 00:36:46.047184 | orchestrator | Sunday 08 March 2026 00:36:44 +0000 (0:00:00.617) 0:00:04.758 **********
2026-03-08 00:36:46.047195 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:36:46.047206 | orchestrator |
2026-03-08 00:36:46.047218 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-08 00:36:46.047228 | orchestrator |
2026-03-08 00:36:46.047239 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-08 00:36:46.047250 | orchestrator | Sunday 08 March 2026 00:36:44 +0000 (0:00:00.142) 0:00:04.900 **********
2026-03-08 00:36:46.047261 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:36:46.047271 | orchestrator |
2026-03-08 00:36:46.047282 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-08 00:36:46.047293 | orchestrator | Sunday 08 March 2026 00:36:44 +0000 (0:00:00.162) 0:00:05.063 **********
2026-03-08 00:36:46.047303 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:36:46.047314 | orchestrator |
2026-03-08 00:36:46.047325 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-08 00:36:46.047336 | orchestrator | Sunday 08 March 2026 00:36:45 +0000 (0:00:00.688) 0:00:05.751 **********
2026-03-08 00:36:46.047368 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:36:46.047380 | orchestrator |
2026-03-08 00:36:46.047390 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:36:46.047403 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:36:46.047416 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:46.047427 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:46.047437 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:46.047448 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:46.047459 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:46.047470 | orchestrator | 2026-03-08 00:36:46.047481 | orchestrator | 2026-03-08 00:36:46.047491 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:36:46.047502 | orchestrator | Sunday 08 March 2026 00:36:45 +0000 (0:00:00.037) 0:00:05.788 ********** 2026-03-08 00:36:46.047520 | orchestrator | =============================================================================== 2026-03-08 00:36:46.047607 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.21s 2026-03-08 00:36:46.047630 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s 2026-03-08 00:36:46.047649 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2026-03-08 00:36:46.383178 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-08 00:36:58.514729 | orchestrator | 2026-03-08 00:36:58 | INFO  | Task f059b8e0-f1aa-4012-8804-199c1216aa8f (wait-for-connection) was prepared for execution. 2026-03-08 00:36:58.514842 | orchestrator | 2026-03-08 00:36:58 | INFO  | It takes a moment until task f059b8e0-f1aa-4012-8804-199c1216aa8f (wait-for-connection) has been started and output is visible here. 
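The reboot sequence logged above ("Reboot system - do not wait for the reboot to complete" followed by a separate `wait-for-connection` task) follows a common Ansible pattern: fire the reboot asynchronously so the play does not hang on the dropping connection, then reconnect in a dedicated play with the `wait_for_connection` module. A minimal, hypothetical sketch of that pattern (host group, timeouts, and the shutdown command are assumptions, not taken from the actual osism playbooks):

```yaml
- name: Reboot systems
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Reboot system - do not wait for the reboot to complete
      # async/poll=0 fires the command and moves on immediately
      ansible.builtin.shell: sleep 2 && shutdown -r now
      async: 1
      poll: 0
      become: true

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 10
        timeout: 300
```

Splitting the wait into its own play matches the log output, where `osism apply wait-for-connection` runs as a separate task after all nodes have been told to reboot.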
2026-03-08 00:37:14.751416 | orchestrator | 2026-03-08 00:37:14.751557 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-08 00:37:14.751604 | orchestrator | 2026-03-08 00:37:14.751616 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-08 00:37:14.751626 | orchestrator | Sunday 08 March 2026 00:37:02 +0000 (0:00:00.237) 0:00:00.237 ********** 2026-03-08 00:37:14.751637 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:14.751648 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:14.751657 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:14.751667 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:14.751676 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:14.751686 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:14.751696 | orchestrator | 2026-03-08 00:37:14.751706 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:37:14.751716 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:14.751727 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:14.751737 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:14.751747 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:14.751756 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:14.751766 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:14.751775 | orchestrator | 2026-03-08 00:37:14.751785 | orchestrator | 2026-03-08 00:37:14.751796 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-08 00:37:14.751811 | orchestrator | Sunday 08 March 2026 00:37:14 +0000 (0:00:11.513) 0:00:11.750 ********** 2026-03-08 00:37:14.751830 | orchestrator | =============================================================================== 2026-03-08 00:37:14.751855 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.51s 2026-03-08 00:37:15.067440 | orchestrator | + osism apply hddtemp 2026-03-08 00:37:27.188410 | orchestrator | 2026-03-08 00:37:27 | INFO  | Task 3dad93a5-7fcb-408d-ad88-6521058e29d4 (hddtemp) was prepared for execution. 2026-03-08 00:37:27.188629 | orchestrator | 2026-03-08 00:37:27 | INFO  | It takes a moment until task 3dad93a5-7fcb-408d-ad88-6521058e29d4 (hddtemp) has been started and output is visible here. 2026-03-08 00:37:53.493358 | orchestrator | 2026-03-08 00:37:53.493551 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-08 00:37:53.493575 | orchestrator | 2026-03-08 00:37:53.493594 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-08 00:37:53.493612 | orchestrator | Sunday 08 March 2026 00:37:31 +0000 (0:00:00.254) 0:00:00.254 ********** 2026-03-08 00:37:53.493628 | orchestrator | ok: [testbed-manager] 2026-03-08 00:37:53.493647 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:53.493664 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:53.493679 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:53.493696 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:53.493713 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:53.493730 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:53.493746 | orchestrator | 2026-03-08 00:37:53.493762 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-08 00:37:53.493776 | orchestrator | Sunday 08 March 2026 
00:37:31 +0000 (0:00:00.695) 0:00:00.949 ********** 2026-03-08 00:37:53.493797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:37:53.493817 | orchestrator | 2026-03-08 00:37:53.493873 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-08 00:37:53.493891 | orchestrator | Sunday 08 March 2026 00:37:33 +0000 (0:00:01.146) 0:00:02.095 ********** 2026-03-08 00:37:53.493909 | orchestrator | ok: [testbed-manager] 2026-03-08 00:37:53.493926 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:53.493942 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:53.493959 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:53.493974 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:53.493991 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:53.494008 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:53.494120 | orchestrator | 2026-03-08 00:37:53.494142 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-08 00:37:53.494181 | orchestrator | Sunday 08 March 2026 00:37:34 +0000 (0:00:01.811) 0:00:03.907 ********** 2026-03-08 00:37:53.494199 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:53.494214 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:37:53.494225 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:37:53.494235 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:37:53.494245 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:37:53.494254 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:37:53.494264 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:37:53.494273 | orchestrator | 2026-03-08 00:37:53.494283 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2026-03-08 00:37:53.494292 | orchestrator | Sunday 08 March 2026 00:37:36 +0000 (0:00:01.145) 0:00:05.053 ********** 2026-03-08 00:37:53.494302 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:53.494311 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:53.494321 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:53.494330 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:53.494340 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:53.494349 | orchestrator | ok: [testbed-manager] 2026-03-08 00:37:53.494359 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:53.494368 | orchestrator | 2026-03-08 00:37:53.494378 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-08 00:37:53.494387 | orchestrator | Sunday 08 March 2026 00:37:37 +0000 (0:00:01.126) 0:00:06.179 ********** 2026-03-08 00:37:53.494397 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:37:53.494406 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:37:53.494416 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:53.494425 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:37:53.494435 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:37:53.494444 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:37:53.494486 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:37:53.494496 | orchestrator | 2026-03-08 00:37:53.494506 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-08 00:37:53.494516 | orchestrator | Sunday 08 March 2026 00:37:38 +0000 (0:00:00.830) 0:00:07.009 ********** 2026-03-08 00:37:53.494525 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:53.494535 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:37:53.494545 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:37:53.494554 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:37:53.494563 | orchestrator | changed: 
[testbed-node-5] 2026-03-08 00:37:53.494573 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:37:53.494584 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:37:53.494600 | orchestrator | 2026-03-08 00:37:53.494616 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-08 00:37:53.494632 | orchestrator | Sunday 08 March 2026 00:37:50 +0000 (0:00:12.212) 0:00:19.222 ********** 2026-03-08 00:37:53.494649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:37:53.494666 | orchestrator | 2026-03-08 00:37:53.494681 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-08 00:37:53.494714 | orchestrator | Sunday 08 March 2026 00:37:51 +0000 (0:00:01.171) 0:00:20.394 ********** 2026-03-08 00:37:53.494731 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:53.494741 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:37:53.494751 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:37:53.494762 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:37:53.494778 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:37:53.494793 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:37:53.494809 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:37:53.494826 | orchestrator | 2026-03-08 00:37:53.494842 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:37:53.494859 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:53.494908 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:53.494926 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:53.494942 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:53.494958 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:53.494974 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:53.494990 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:53.495007 | orchestrator | 2026-03-08 00:37:53.495022 | orchestrator | 2026-03-08 00:37:53.495040 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:37:53.495051 | orchestrator | Sunday 08 March 2026 00:37:53 +0000 (0:00:01.753) 0:00:22.148 ********** 2026-03-08 00:37:53.495060 | orchestrator | =============================================================================== 2026-03-08 00:37:53.495070 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.21s 2026-03-08 00:37:53.495079 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.81s 2026-03-08 00:37:53.495096 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2026-03-08 00:37:53.495106 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2026-03-08 00:37:53.495116 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.15s 2026-03-08 00:37:53.495125 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.15s 2026-03-08 00:37:53.495134 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.13s 2026-03-08 00:37:53.495144 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.83s 2026-03-08 00:37:53.495153 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2026-03-08 00:37:53.779192 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-08 00:37:53.823985 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-08 00:37:53.824111 | orchestrator | + sudo systemctl restart manager.service 2026-03-08 00:38:06.904373 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-08 00:38:06.904537 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-08 00:38:06.904557 | orchestrator | + local max_attempts=60 2026-03-08 00:38:06.904571 | orchestrator | + local name=ceph-ansible 2026-03-08 00:38:06.904582 | orchestrator | + local attempt_num=1 2026-03-08 00:38:06.904594 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:06.932390 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:06.932535 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:06.932553 | orchestrator | + sleep 5 2026-03-08 00:38:11.936262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:11.994599 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:11.994696 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:11.994712 | orchestrator | + sleep 5 2026-03-08 00:38:16.998003 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:17.032507 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:17.032603 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:17.032619 | orchestrator | + sleep 5 2026-03-08 00:38:22.036705 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:22.075725 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:22.075820 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-08 00:38:22.075835 | orchestrator | + sleep 5 2026-03-08 00:38:27.079691 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:27.115381 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:27.115520 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:27.115535 | orchestrator | + sleep 5 2026-03-08 00:38:32.119842 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:32.155485 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:32.155554 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:32.155560 | orchestrator | + sleep 5 2026-03-08 00:38:37.160297 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:37.201829 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:37.201929 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:37.201946 | orchestrator | + sleep 5 2026-03-08 00:38:42.205587 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:42.244503 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:42.244615 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:42.244639 | orchestrator | + sleep 5 2026-03-08 00:38:47.246257 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:47.262731 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:47.262799 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:47.262808 | orchestrator | + sleep 5 2026-03-08 00:38:52.266703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:52.310075 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:52.310169 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-08 00:38:52.310184 | orchestrator | + sleep 5 2026-03-08 00:38:57.314832 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:57.353543 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:57.353639 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:57.353655 | orchestrator | + sleep 5 2026-03-08 00:39:02.358342 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:39:02.394124 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:02.394232 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:39:02.394250 | orchestrator | + sleep 5 2026-03-08 00:39:07.397879 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:39:07.432814 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:07.432892 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:39:07.432909 | orchestrator | + sleep 5 2026-03-08 00:39:12.437339 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:39:12.473461 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:12.473553 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-08 00:39:12.473569 | orchestrator | + local max_attempts=60 2026-03-08 00:39:12.473584 | orchestrator | + local name=kolla-ansible 2026-03-08 00:39:12.473596 | orchestrator | + local attempt_num=1 2026-03-08 00:39:12.473609 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-08 00:39:12.506600 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:12.506700 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-08 00:39:12.506724 | orchestrator | + local max_attempts=60 2026-03-08 00:39:12.506745 | orchestrator | + local name=osism-ansible 2026-03-08 00:39:12.506800 | 
orchestrator | + local attempt_num=1 2026-03-08 00:39:12.506821 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-08 00:39:12.530995 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:12.531066 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-08 00:39:12.531080 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-08 00:39:12.676565 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-08 00:39:12.824133 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-08 00:39:12.966823 | orchestrator | ARA in osism-ansible already disabled. 2026-03-08 00:39:13.106245 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-08 00:39:13.106342 | orchestrator | + osism apply gather-facts 2026-03-08 00:39:24.901182 | orchestrator | 2026-03-08 00:39:24 | INFO  | Task 0b152545-f095-4d18-b535-d44beb774a24 (gather-facts) was prepared for execution. 2026-03-08 00:39:24.901294 | orchestrator | 2026-03-08 00:39:24 | INFO  | It takes a moment until task 0b152545-f095-4d18-b535-d44beb774a24 (gather-facts) has been started and output is visible here. 
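The `set -x` trace above shows the shape of the `wait_for_container_healthy` helper: poll `docker inspect` for the container's health status, post-increment an attempt counter against a budget, and sleep 5 seconds between polls. A hedged reconstruction (the variable names and 5-second interval come from the trace; the error message and exact control flow are assumptions):

```shell
# Poll a container until Docker reports it healthy, or give up after
# max_attempts polls. Mirrors the logic visible in the traced output.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Post-increment comparison, as in the trace: the Nth failed
        # check exhausts a budget of N attempts.
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log this helper is called as `wait_for_container_healthy 60 ceph-ansible`, giving the container roughly five minutes to pass its health check after the manager service restart.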
2026-03-08 00:39:37.618610 | orchestrator | 2026-03-08 00:39:37.618688 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:39:37.618703 | orchestrator | 2026-03-08 00:39:37.618715 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:39:37.618726 | orchestrator | Sunday 08 March 2026 00:39:28 +0000 (0:00:00.160) 0:00:00.160 ********** 2026-03-08 00:39:37.618738 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:39:37.618752 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:39:37.618763 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:39:37.618775 | orchestrator | ok: [testbed-manager] 2026-03-08 00:39:37.618786 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:39:37.618798 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:39:37.618809 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:39:37.618820 | orchestrator | 2026-03-08 00:39:37.618832 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-08 00:39:37.618843 | orchestrator | 2026-03-08 00:39:37.618855 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-08 00:39:37.618867 | orchestrator | Sunday 08 March 2026 00:39:36 +0000 (0:00:08.191) 0:00:08.352 ********** 2026-03-08 00:39:37.618878 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:39:37.618890 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:39:37.618902 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:39:37.618913 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:39:37.618925 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:39:37.618936 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:39:37.618947 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:39:37.618959 | orchestrator | 2026-03-08 00:39:37.618970 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-08 00:39:37.618982 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.618994 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.619006 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.619018 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.619029 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.619041 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.619052 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:37.619085 | orchestrator | 2026-03-08 00:39:37.619097 | orchestrator | 2026-03-08 00:39:37.619108 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:39:37.619120 | orchestrator | Sunday 08 March 2026 00:39:37 +0000 (0:00:00.504) 0:00:08.857 ********** 2026-03-08 00:39:37.619131 | orchestrator | =============================================================================== 2026-03-08 00:39:37.619142 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.19s 2026-03-08 00:39:37.619154 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-08 00:39:37.901252 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-08 00:39:37.912412 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-08 
00:39:37.922424 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-08 00:39:37.935953 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-08 00:39:37.946536 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-08 00:39:37.959700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-08 00:39:37.971660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-08 00:39:37.982786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-08 00:39:37.992178 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-08 00:39:38.001805 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-08 00:39:38.012128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-08 00:39:38.021236 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-08 00:39:38.031059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-08 00:39:38.044694 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-08 00:39:38.061279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-08 00:39:38.070529 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-08 00:39:38.081210 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-08 00:39:38.093519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-08 00:39:38.106184 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-08 00:39:38.120392 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-08 00:39:38.133700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-08 00:39:38.151118 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-08 00:39:38.168020 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-08 00:39:38.182165 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-08 00:39:38.555884 | orchestrator | ok: Runtime: 0:23:48.062628 2026-03-08 00:39:38.660707 | 2026-03-08 00:39:38.660854 | TASK [Deploy services] 2026-03-08 00:39:39.195877 | orchestrator | skipping: Conditional result was False 2026-03-08 00:39:39.214282 | 2026-03-08 00:39:39.214483 | TASK [Deploy in a nutshell] 2026-03-08 00:39:39.904323 | orchestrator | + set -e 2026-03-08 00:39:39.905889 | orchestrator | 2026-03-08 00:39:39.905906 | orchestrator | # PULL IMAGES 2026-03-08 00:39:39.905912 | orchestrator | 2026-03-08 00:39:39.905921 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-08 00:39:39.905929 | orchestrator | ++ export INTERACTIVE=false 2026-03-08 00:39:39.905935 | orchestrator | ++ INTERACTIVE=false 2026-03-08 00:39:39.905954 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-08 00:39:39.905964 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-08 00:39:39.905970 | orchestrator | + source /opt/manager-vars.sh 2026-03-08 00:39:39.905974 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-08 00:39:39.905982 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-08 00:39:39.905994 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-08 00:39:39.906001 | orchestrator | ++ CEPH_VERSION=reef 2026-03-08 00:39:39.906005 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-08 00:39:39.906031 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-08 00:39:39.906037 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-08 00:39:39.906043 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-08 00:39:39.906047 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-08 00:39:39.906052 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-08 00:39:39.906056 | orchestrator | ++ export ARA=false 2026-03-08 00:39:39.906061 | orchestrator | ++ ARA=false 2026-03-08 00:39:39.906065 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-08 00:39:39.906069 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-08 00:39:39.906073 | orchestrator | ++ export TEMPEST=true 2026-03-08 00:39:39.906077 | orchestrator | ++ TEMPEST=true 2026-03-08 00:39:39.906081 | orchestrator | ++ export IS_ZUUL=true 2026-03-08 00:39:39.906085 | orchestrator | ++ IS_ZUUL=true 2026-03-08 00:39:39.906090 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:39:39.906094 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.24 2026-03-08 00:39:39.906098 | orchestrator | ++ export EXTERNAL_API=false 2026-03-08 00:39:39.906102 | orchestrator | ++ EXTERNAL_API=false 2026-03-08 00:39:39.906106 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-08 00:39:39.906110 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-08 00:39:39.906115 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-08 00:39:39.906119 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-08 00:39:39.906123 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-08 00:39:39.906130 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-08 00:39:39.906135 | orchestrator | + echo 2026-03-08 00:39:39.906139 | orchestrator | + echo '# PULL IMAGES' 2026-03-08 00:39:39.906143 | orchestrator | + echo 2026-03-08 00:39:39.906151 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-08 00:39:39.957637 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-08 00:39:39.957732 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-08 00:39:41.615693 | orchestrator | 2026-03-08 00:39:41 | INFO  | Trying to run play pull-images in environment custom 2026-03-08 00:39:51.797582 | orchestrator | 2026-03-08 00:39:51 | INFO  | Task 2d87fd43-fce3-49ee-bba9-9f31d2bcd1c8 (pull-images) was prepared for execution. 2026-03-08 00:39:51.797657 | orchestrator | 2026-03-08 00:39:51 | INFO  | Task 2d87fd43-fce3-49ee-bba9-9f31d2bcd1c8 is running in background. No more output. Check ARA for logs. 2026-03-08 00:39:53.739494 | orchestrator | 2026-03-08 00:39:53 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-08 00:40:03.963069 | orchestrator | 2026-03-08 00:40:03 | INFO  | Task 102cd731-97d7-417d-ac81-2aee26880bf6 (wipe-partitions) was prepared for execution. 2026-03-08 00:40:03.963172 | orchestrator | 2026-03-08 00:40:03 | INFO  | It takes a moment until task 102cd731-97d7-417d-ac81-2aee26880bf6 (wipe-partitions) has been started and output is visible here. 
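The script trace above gates image pulling on a version comparison: `semver 9.5.0 7.0.0` prints `1` (first argument is newer), and `[[ 1 -ge 0 ]]` lets the `osism apply ... pull-images` call proceed. A minimal sketch of that gate, using a hypothetical `compare_semver` shell function in place of the `semver` binary the script actually calls (same convention: 1 if greater, 0 if equal, -1 if less):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper used in the trace above:
# prints 1 if a > b, 0 if a == b, -1 if a < b.
set -euo pipefail

compare_semver() {
    local a="$1" b="$2" i
    local -a pa pb
    IFS=. read -r -a pa <<< "$a"
    IFS=. read -r -a pb <<< "$b"
    for i in 0 1 2; do
        if (( ${pa[i]:-0} > ${pb[i]:-0} )); then echo 1; return; fi
        if (( ${pa[i]:-0} < ${pb[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}

result="$(compare_semver 9.5.0 7.0.0)"
if [[ "$result" -ge 0 ]]; then
    # Manager version is new enough, so the script pulls images up front,
    # mirroring the `osism apply --no-wait -r 2 -e custom pull-images` call.
    echo "manager >= 7.0.0: pull images first"
fi
```

The `-ge 0` check (rather than `-eq 1`) matches the trace: the pull step runs when the manager version is greater than *or equal to* the threshold.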
2026-03-08 00:40:15.607920 | orchestrator | 2026-03-08 00:40:15.608054 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-08 00:40:15.608070 | orchestrator | 2026-03-08 00:40:15.608080 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-08 00:40:15.608099 | orchestrator | Sunday 08 March 2026 00:40:08 +0000 (0:00:00.125) 0:00:00.125 ********** 2026-03-08 00:40:15.608109 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:40:15.608120 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:40:15.608131 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:40:15.608141 | orchestrator | 2026-03-08 00:40:15.608151 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-08 00:40:15.608193 | orchestrator | Sunday 08 March 2026 00:40:08 +0000 (0:00:00.561) 0:00:00.686 ********** 2026-03-08 00:40:15.608204 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:15.608213 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:40:15.608223 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:40:15.608238 | orchestrator | 2026-03-08 00:40:15.608248 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-08 00:40:15.608258 | orchestrator | Sunday 08 March 2026 00:40:08 +0000 (0:00:00.296) 0:00:00.983 ********** 2026-03-08 00:40:15.608268 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:40:15.608279 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:40:15.608288 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:15.608298 | orchestrator | 2026-03-08 00:40:15.608308 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-08 00:40:15.608362 | orchestrator | Sunday 08 March 2026 00:40:09 +0000 (0:00:00.487) 0:00:01.471 ********** 2026-03-08 00:40:15.608372 | orchestrator | skipping: 
[testbed-node-3] 2026-03-08 00:40:15.608381 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:40:15.608391 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:40:15.608401 | orchestrator | 2026-03-08 00:40:15.608411 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-08 00:40:15.608422 | orchestrator | Sunday 08 March 2026 00:40:09 +0000 (0:00:00.248) 0:00:01.719 ********** 2026-03-08 00:40:15.608434 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-08 00:40:15.608449 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-08 00:40:15.608461 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-08 00:40:15.608472 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-08 00:40:15.608483 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-08 00:40:15.608494 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-08 00:40:15.608506 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-08 00:40:15.608517 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-08 00:40:15.608528 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-08 00:40:15.608540 | orchestrator | 2026-03-08 00:40:15.608550 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-08 00:40:15.608559 | orchestrator | Sunday 08 March 2026 00:40:10 +0000 (0:00:01.077) 0:00:02.797 ********** 2026-03-08 00:40:15.608570 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-08 00:40:15.608580 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-08 00:40:15.608589 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-08 00:40:15.608599 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-08 00:40:15.608608 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-08 00:40:15.608618 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-08 00:40:15.608627 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-08 00:40:15.608637 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-08 00:40:15.608647 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-08 00:40:15.608656 | orchestrator | 2026-03-08 00:40:15.608666 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-08 00:40:15.608675 | orchestrator | Sunday 08 March 2026 00:40:12 +0000 (0:00:01.570) 0:00:04.367 ********** 2026-03-08 00:40:15.608685 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-08 00:40:15.608694 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-08 00:40:15.608704 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-08 00:40:15.608714 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-08 00:40:15.608723 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-08 00:40:15.608733 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-08 00:40:15.608742 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-08 00:40:15.608759 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-08 00:40:15.608777 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-08 00:40:15.608787 | orchestrator | 2026-03-08 00:40:15.608797 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-08 00:40:15.608806 | orchestrator | Sunday 08 March 2026 00:40:14 +0000 (0:00:01.915) 0:00:06.282 ********** 2026-03-08 00:40:15.608816 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:40:15.608826 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:40:15.608835 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:40:15.608845 | orchestrator | 2026-03-08 00:40:15.608862 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-08 00:40:15.608880 | orchestrator | Sunday 08 March 2026 00:40:14 +0000 (0:00:00.530) 0:00:06.813 ********** 2026-03-08 00:40:15.608898 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:40:15.608915 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:40:15.608932 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:40:15.608949 | orchestrator | 2026-03-08 00:40:15.608967 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:40:15.608985 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:15.609006 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:15.609045 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:15.609055 | orchestrator | 2026-03-08 00:40:15.609065 | orchestrator | 2026-03-08 00:40:15.609074 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:40:15.609084 | orchestrator | Sunday 08 March 2026 00:40:15 +0000 (0:00:00.560) 0:00:07.373 ********** 2026-03-08 00:40:15.609094 | orchestrator | =============================================================================== 2026-03-08 00:40:15.609104 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 1.92s 2026-03-08 00:40:15.609113 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s 2026-03-08 00:40:15.609130 | orchestrator | Check device availability ----------------------------------------------- 1.08s 2026-03-08 00:40:15.609146 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s 2026-03-08 00:40:15.609162 | orchestrator | Request device events from the kernel 
----------------------------------- 0.56s 2026-03-08 00:40:15.609178 | orchestrator | Reload udev rules ------------------------------------------------------- 0.53s 2026-03-08 00:40:15.609194 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.49s 2026-03-08 00:40:15.609208 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2026-03-08 00:40:15.609224 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-03-08 00:40:27.789458 | orchestrator | 2026-03-08 00:40:27 | INFO  | Task 304bf37f-0861-44f1-94cf-35c85dbf72c8 (facts) was prepared for execution. 2026-03-08 00:40:27.789561 | orchestrator | 2026-03-08 00:40:27 | INFO  | It takes a moment until task 304bf37f-0861-44f1-94cf-35c85dbf72c8 (facts) has been started and output is visible here. 2026-03-08 00:40:39.420097 | orchestrator | 2026-03-08 00:40:39.420203 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-08 00:40:39.420219 | orchestrator | 2026-03-08 00:40:39.420231 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-08 00:40:39.420243 | orchestrator | Sunday 08 March 2026 00:40:31 +0000 (0:00:00.230) 0:00:00.230 ********** 2026-03-08 00:40:39.420255 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:40:39.420266 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:40:39.420277 | orchestrator | ok: [testbed-manager] 2026-03-08 00:40:39.420288 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:40:39.420366 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:39.420387 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:40:39.420398 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:40:39.420408 | orchestrator | 2026-03-08 00:40:39.420419 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-08 00:40:39.420430 | 
orchestrator | Sunday 08 March 2026 00:40:32 +0000 (0:00:00.982) 0:00:01.212 ********** 2026-03-08 00:40:39.420440 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:40:39.420452 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:40:39.420464 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:40:39.420475 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:40:39.420485 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:39.420496 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:40:39.420507 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:40:39.420517 | orchestrator | 2026-03-08 00:40:39.420528 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:40:39.420538 | orchestrator | 2026-03-08 00:40:39.420549 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:40:39.420559 | orchestrator | Sunday 08 March 2026 00:40:34 +0000 (0:00:01.107) 0:00:02.320 ********** 2026-03-08 00:40:39.420570 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:40:39.420580 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:40:39.420591 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:40:39.420602 | orchestrator | ok: [testbed-manager] 2026-03-08 00:40:39.420613 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:39.420623 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:40:39.420634 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:40:39.420644 | orchestrator | 2026-03-08 00:40:39.420655 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-08 00:40:39.420666 | orchestrator | 2026-03-08 00:40:39.420676 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-08 00:40:39.420687 | orchestrator | Sunday 08 March 2026 00:40:38 +0000 (0:00:04.539) 0:00:06.859 ********** 2026-03-08 00:40:39.420698 | orchestrator | 
skipping: [testbed-manager] 2026-03-08 00:40:39.420708 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:40:39.420719 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:40:39.420729 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:40:39.420755 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:39.420767 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:40:39.420778 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:40:39.420788 | orchestrator | 2026-03-08 00:40:39.420799 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:40:39.420810 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420822 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420833 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420844 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420854 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420865 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420876 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:40:39.420886 | orchestrator | 2026-03-08 00:40:39.420898 | orchestrator | 2026-03-08 00:40:39.420908 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:40:39.420927 | orchestrator | Sunday 08 March 2026 00:40:39 +0000 (0:00:00.513) 0:00:07.373 ********** 2026-03-08 00:40:39.420938 | orchestrator | =============================================================================== 
2026-03-08 00:40:39.420948 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s 2026-03-08 00:40:39.420959 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-03-08 00:40:39.420970 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2026-03-08 00:40:39.420981 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-03-08 00:40:41.676661 | orchestrator | 2026-03-08 00:40:41 | INFO  | Task 33c1de0f-f11f-47f2-a333-96fdd6eb9925 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-08 00:40:41.676760 | orchestrator | 2026-03-08 00:40:41 | INFO  | It takes a moment until task 33c1de0f-f11f-47f2-a333-96fdd6eb9925 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-08 00:40:52.425195 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-08 00:40:52.425276 | orchestrator | 2.16.14 2026-03-08 00:40:52.425287 | orchestrator | 2026-03-08 00:40:52.425295 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-08 00:40:52.425302 | orchestrator | 2026-03-08 00:40:52.425308 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-08 00:40:52.425316 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.350) 0:00:00.350 ********** 2026-03-08 00:40:52.425324 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-08 00:40:52.425331 | orchestrator | 2026-03-08 00:40:52.425337 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-08 00:40:52.425344 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.259) 0:00:00.610 ********** 2026-03-08 00:40:52.425387 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:52.425393 | orchestrator | 
2026-03-08 00:40:52.425400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425406 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.209) 0:00:00.819 ********** 2026-03-08 00:40:52.425413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-08 00:40:52.425419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-08 00:40:52.425426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-08 00:40:52.425432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-08 00:40:52.425438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-08 00:40:52.425444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-08 00:40:52.425450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-08 00:40:52.425457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-08 00:40:52.425463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-08 00:40:52.425469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-08 00:40:52.425475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-08 00:40:52.425481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-08 00:40:52.425494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-08 00:40:52.425500 | orchestrator | 2026-03-08 00:40:52.425506 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-03-08 00:40:52.425512 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.402) 0:00:01.222 ********** 2026-03-08 00:40:52.425536 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425542 | orchestrator | 2026-03-08 00:40:52.425548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425554 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.173) 0:00:01.395 ********** 2026-03-08 00:40:52.425561 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425567 | orchestrator | 2026-03-08 00:40:52.425573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425579 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.167) 0:00:01.563 ********** 2026-03-08 00:40:52.425585 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425591 | orchestrator | 2026-03-08 00:40:52.425597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425603 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.186) 0:00:01.750 ********** 2026-03-08 00:40:52.425612 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425618 | orchestrator | 2026-03-08 00:40:52.425624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425630 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.194) 0:00:01.945 ********** 2026-03-08 00:40:52.425636 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425643 | orchestrator | 2026-03-08 00:40:52.425649 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425655 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.188) 0:00:02.134 ********** 2026-03-08 00:40:52.425661 | orchestrator | skipping: 
[testbed-node-3] 2026-03-08 00:40:52.425667 | orchestrator | 2026-03-08 00:40:52.425673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425679 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.197) 0:00:02.332 ********** 2026-03-08 00:40:52.425685 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425692 | orchestrator | 2026-03-08 00:40:52.425698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425704 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.181) 0:00:02.513 ********** 2026-03-08 00:40:52.425710 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.425716 | orchestrator | 2026-03-08 00:40:52.425722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425728 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.178) 0:00:02.692 ********** 2026-03-08 00:40:52.425735 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6) 2026-03-08 00:40:52.425741 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6) 2026-03-08 00:40:52.425747 | orchestrator | 2026-03-08 00:40:52.425754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425772 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.381) 0:00:03.074 ********** 2026-03-08 00:40:52.425779 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0) 2026-03-08 00:40:52.425785 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0) 2026-03-08 00:40:52.425791 | orchestrator | 2026-03-08 00:40:52.425797 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-03-08 00:40:52.425804 | orchestrator | Sunday 08 March 2026 00:40:49 +0000 (0:00:00.527) 0:00:03.602 ********** 2026-03-08 00:40:52.425809 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf) 2026-03-08 00:40:52.425816 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf) 2026-03-08 00:40:52.425822 | orchestrator | 2026-03-08 00:40:52.425828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425834 | orchestrator | Sunday 08 March 2026 00:40:49 +0000 (0:00:00.510) 0:00:04.113 ********** 2026-03-08 00:40:52.425845 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe) 2026-03-08 00:40:52.425852 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe) 2026-03-08 00:40:52.425858 | orchestrator | 2026-03-08 00:40:52.425864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:40:52.425870 | orchestrator | Sunday 08 March 2026 00:40:50 +0000 (0:00:00.667) 0:00:04.780 ********** 2026-03-08 00:40:52.425876 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-08 00:40:52.425882 | orchestrator | 2026-03-08 00:40:52.425888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.425894 | orchestrator | Sunday 08 March 2026 00:40:50 +0000 (0:00:00.307) 0:00:05.088 ********** 2026-03-08 00:40:52.425903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-08 00:40:52.425909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-08 00:40:52.425915 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-08 00:40:52.425921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-08 00:40:52.425927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-08 00:40:52.425933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-08 00:40:52.425939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-08 00:40:52.425945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-08 00:40:52.425951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-08 00:40:52.425957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-08 00:40:52.425963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-08 00:40:52.425969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-08 00:40:52.425975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-08 00:40:52.425981 | orchestrator | 2026-03-08 00:40:52.425987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.425993 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.356) 0:00:05.444 ********** 2026-03-08 00:40:52.425999 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426005 | orchestrator | 2026-03-08 00:40:52.426053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.426061 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.187) 
0:00:05.632 ********** 2026-03-08 00:40:52.426067 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426073 | orchestrator | 2026-03-08 00:40:52.426079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.426085 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.184) 0:00:05.817 ********** 2026-03-08 00:40:52.426091 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426097 | orchestrator | 2026-03-08 00:40:52.426103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.426111 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.182) 0:00:06.000 ********** 2026-03-08 00:40:52.426122 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426132 | orchestrator | 2026-03-08 00:40:52.426143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.426153 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.209) 0:00:06.210 ********** 2026-03-08 00:40:52.426162 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426179 | orchestrator | 2026-03-08 00:40:52.426189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.426197 | orchestrator | Sunday 08 March 2026 00:40:52 +0000 (0:00:00.179) 0:00:06.390 ********** 2026-03-08 00:40:52.426207 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426217 | orchestrator | 2026-03-08 00:40:52.426227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:52.426237 | orchestrator | Sunday 08 March 2026 00:40:52 +0000 (0:00:00.172) 0:00:06.562 ********** 2026-03-08 00:40:52.426246 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:52.426257 | orchestrator | 2026-03-08 00:40:52.426274 | orchestrator | TASK [Add known partitions to 
the list of available block devices] ************* 2026-03-08 00:40:59.071288 | orchestrator | Sunday 08 March 2026 00:40:52 +0000 (0:00:00.176) 0:00:06.739 ********** 2026-03-08 00:40:59.071447 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.071468 | orchestrator | 2026-03-08 00:40:59.071482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:59.071493 | orchestrator | Sunday 08 March 2026 00:40:52 +0000 (0:00:00.189) 0:00:06.928 ********** 2026-03-08 00:40:59.071504 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-08 00:40:59.071520 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-08 00:40:59.071539 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-08 00:40:59.071570 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-08 00:40:59.071591 | orchestrator | 2026-03-08 00:40:59.071611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:59.071630 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.828) 0:00:07.757 ********** 2026-03-08 00:40:59.071651 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.071672 | orchestrator | 2026-03-08 00:40:59.071693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:59.071714 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.198) 0:00:07.955 ********** 2026-03-08 00:40:59.071735 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.071751 | orchestrator | 2026-03-08 00:40:59.071762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:59.071773 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.181) 0:00:08.137 ********** 2026-03-08 00:40:59.071784 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.071795 | orchestrator | 2026-03-08 00:40:59.071806 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:40:59.071817 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.187) 0:00:08.324 ********** 2026-03-08 00:40:59.071830 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.071843 | orchestrator | 2026-03-08 00:40:59.071856 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-08 00:40:59.071869 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.182) 0:00:08.507 ********** 2026-03-08 00:40:59.071881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-08 00:40:59.071895 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-08 00:40:59.071906 | orchestrator | 2026-03-08 00:40:59.071917 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-08 00:40:59.071928 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.145) 0:00:08.653 ********** 2026-03-08 00:40:59.071938 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.071949 | orchestrator | 2026-03-08 00:40:59.071960 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-08 00:40:59.071992 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.116) 0:00:08.769 ********** 2026-03-08 00:40:59.072004 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072015 | orchestrator | 2026-03-08 00:40:59.072026 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-08 00:40:59.072037 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.108) 0:00:08.878 ********** 2026-03-08 00:40:59.072069 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072080 | orchestrator | 2026-03-08 00:40:59.072091 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-03-08 00:40:59.072102 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.117) 0:00:08.996 ********** 2026-03-08 00:40:59.072113 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:59.072124 | orchestrator | 2026-03-08 00:40:59.072135 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-08 00:40:59.072146 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.112) 0:00:09.108 ********** 2026-03-08 00:40:59.072157 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd02f715b-f6fc-5dd9-afa3-4d404d1973db'}}) 2026-03-08 00:40:59.072168 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '06971c7f-d1d9-5519-989d-752a08544c4e'}}) 2026-03-08 00:40:59.072179 | orchestrator | 2026-03-08 00:40:59.072190 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-08 00:40:59.072200 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.144) 0:00:09.253 ********** 2026-03-08 00:40:59.072212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd02f715b-f6fc-5dd9-afa3-4d404d1973db'}})  2026-03-08 00:40:59.072234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '06971c7f-d1d9-5519-989d-752a08544c4e'}})  2026-03-08 00:40:59.072252 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072266 | orchestrator | 2026-03-08 00:40:59.072277 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-08 00:40:59.072288 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.131) 0:00:09.385 ********** 2026-03-08 00:40:59.072299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd02f715b-f6fc-5dd9-afa3-4d404d1973db'}})  2026-03-08 00:40:59.072310 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '06971c7f-d1d9-5519-989d-752a08544c4e'}})  2026-03-08 00:40:59.072320 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072331 | orchestrator | 2026-03-08 00:40:59.072367 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-08 00:40:59.072380 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.269) 0:00:09.654 ********** 2026-03-08 00:40:59.072390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd02f715b-f6fc-5dd9-afa3-4d404d1973db'}})  2026-03-08 00:40:59.072419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '06971c7f-d1d9-5519-989d-752a08544c4e'}})  2026-03-08 00:40:59.072431 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072442 | orchestrator | 2026-03-08 00:40:59.072453 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-08 00:40:59.072468 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.139) 0:00:09.793 ********** 2026-03-08 00:40:59.072486 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:59.072504 | orchestrator | 2026-03-08 00:40:59.072522 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-08 00:40:59.072538 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.134) 0:00:09.927 ********** 2026-03-08 00:40:59.072554 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:59.072572 | orchestrator | 2026-03-08 00:40:59.072598 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-08 00:40:59.072615 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.143) 0:00:10.071 ********** 2026-03-08 00:40:59.072630 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072647 | orchestrator | 
2026-03-08 00:40:59.072665 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-08 00:40:59.072682 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.129) 0:00:10.201 ********** 2026-03-08 00:40:59.072715 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072732 | orchestrator | 2026-03-08 00:40:59.072750 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-08 00:40:59.072768 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.134) 0:00:10.335 ********** 2026-03-08 00:40:59.072785 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.072802 | orchestrator | 2026-03-08 00:40:59.072820 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-08 00:40:59.072838 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.136) 0:00:10.472 ********** 2026-03-08 00:40:59.072856 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:40:59.072874 | orchestrator |  "ceph_osd_devices": { 2026-03-08 00:40:59.072892 | orchestrator |  "sdb": { 2026-03-08 00:40:59.072910 | orchestrator |  "osd_lvm_uuid": "d02f715b-f6fc-5dd9-afa3-4d404d1973db" 2026-03-08 00:40:59.072929 | orchestrator |  }, 2026-03-08 00:40:59.072948 | orchestrator |  "sdc": { 2026-03-08 00:40:59.072966 | orchestrator |  "osd_lvm_uuid": "06971c7f-d1d9-5519-989d-752a08544c4e" 2026-03-08 00:40:59.072984 | orchestrator |  } 2026-03-08 00:40:59.073002 | orchestrator |  } 2026-03-08 00:40:59.073019 | orchestrator | } 2026-03-08 00:40:59.073039 | orchestrator | 2026-03-08 00:40:59.073059 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-08 00:40:59.073076 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.143) 0:00:10.616 ********** 2026-03-08 00:40:59.073092 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.073108 | orchestrator | 
2026-03-08 00:40:59.073126 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-08 00:40:59.073143 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.133) 0:00:10.749 ********** 2026-03-08 00:40:59.073161 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.073177 | orchestrator | 2026-03-08 00:40:59.073194 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-08 00:40:59.073212 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.131) 0:00:10.881 ********** 2026-03-08 00:40:59.073229 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:59.073248 | orchestrator | 2026-03-08 00:40:59.073265 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-08 00:40:59.073282 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.117) 0:00:10.998 ********** 2026-03-08 00:40:59.073300 | orchestrator | changed: [testbed-node-3] => { 2026-03-08 00:40:59.073317 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-08 00:40:59.073335 | orchestrator |  "ceph_osd_devices": { 2026-03-08 00:40:59.073388 | orchestrator |  "sdb": { 2026-03-08 00:40:59.073407 | orchestrator |  "osd_lvm_uuid": "d02f715b-f6fc-5dd9-afa3-4d404d1973db" 2026-03-08 00:40:59.073426 | orchestrator |  }, 2026-03-08 00:40:59.073444 | orchestrator |  "sdc": { 2026-03-08 00:40:59.073463 | orchestrator |  "osd_lvm_uuid": "06971c7f-d1d9-5519-989d-752a08544c4e" 2026-03-08 00:40:59.073482 | orchestrator |  } 2026-03-08 00:40:59.073500 | orchestrator |  }, 2026-03-08 00:40:59.073516 | orchestrator |  "lvm_volumes": [ 2026-03-08 00:40:59.073533 | orchestrator |  { 2026-03-08 00:40:59.073551 | orchestrator |  "data": "osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db", 2026-03-08 00:40:59.073569 | orchestrator |  "data_vg": "ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db" 2026-03-08 00:40:59.073586 | orchestrator |  }, 
2026-03-08 00:40:59.073604 | orchestrator |  { 2026-03-08 00:40:59.073623 | orchestrator |  "data": "osd-block-06971c7f-d1d9-5519-989d-752a08544c4e", 2026-03-08 00:40:59.073642 | orchestrator |  "data_vg": "ceph-06971c7f-d1d9-5519-989d-752a08544c4e" 2026-03-08 00:40:59.073660 | orchestrator |  } 2026-03-08 00:40:59.073678 | orchestrator |  ] 2026-03-08 00:40:59.073695 | orchestrator |  } 2026-03-08 00:40:59.073714 | orchestrator | } 2026-03-08 00:40:59.073750 | orchestrator | 2026-03-08 00:40:59.073769 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-08 00:40:59.073788 | orchestrator | Sunday 08 March 2026 00:40:57 +0000 (0:00:00.323) 0:00:11.322 ********** 2026-03-08 00:40:59.073804 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-08 00:40:59.073823 | orchestrator | 2026-03-08 00:40:59.073853 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-08 00:40:59.073873 | orchestrator | 2026-03-08 00:40:59.073889 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-08 00:40:59.073906 | orchestrator | Sunday 08 March 2026 00:40:58 +0000 (0:00:01.630) 0:00:12.952 ********** 2026-03-08 00:40:59.073924 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-08 00:40:59.073941 | orchestrator | 2026-03-08 00:40:59.073960 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-08 00:40:59.073978 | orchestrator | Sunday 08 March 2026 00:40:58 +0000 (0:00:00.230) 0:00:13.183 ********** 2026-03-08 00:40:59.073996 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:40:59.074082 | orchestrator | 2026-03-08 00:40:59.074134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.852595 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.202) 
0:00:13.385 ********** 2026-03-08 00:41:05.852696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-08 00:41:05.852710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:41:05.852722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:41:05.852733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:41:05.852744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:41:05.852755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:41:05.852766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:41:05.852777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:41:05.852787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-08 00:41:05.852798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:41:05.852808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:41:05.852819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:41:05.852836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:41:05.852847 | orchestrator | 2026-03-08 00:41:05.852860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.852871 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.374) 0:00:13.760 ********** 2026-03-08 00:41:05.852882 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:41:05.852893 | orchestrator | 2026-03-08 00:41:05.852904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.852914 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.184) 0:00:13.945 ********** 2026-03-08 00:41:05.852925 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.852936 | orchestrator | 2026-03-08 00:41:05.852947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.852958 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.184) 0:00:14.129 ********** 2026-03-08 00:41:05.852968 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.852979 | orchestrator | 2026-03-08 00:41:05.852990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853001 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.185) 0:00:14.314 ********** 2026-03-08 00:41:05.853037 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853049 | orchestrator | 2026-03-08 00:41:05.853059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853070 | orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.156) 0:00:14.471 ********** 2026-03-08 00:41:05.853081 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853092 | orchestrator | 2026-03-08 00:41:05.853102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853113 | orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.448) 0:00:14.920 ********** 2026-03-08 00:41:05.853126 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853139 | orchestrator | 2026-03-08 00:41:05.853152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853165 | 
orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.166) 0:00:15.086 ********** 2026-03-08 00:41:05.853177 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853190 | orchestrator | 2026-03-08 00:41:05.853202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853216 | orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.164) 0:00:15.251 ********** 2026-03-08 00:41:05.853229 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853242 | orchestrator | 2026-03-08 00:41:05.853272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853285 | orchestrator | Sunday 08 March 2026 00:41:01 +0000 (0:00:00.162) 0:00:15.413 ********** 2026-03-08 00:41:05.853298 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633) 2026-03-08 00:41:05.853312 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633) 2026-03-08 00:41:05.853324 | orchestrator | 2026-03-08 00:41:05.853361 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853375 | orchestrator | Sunday 08 March 2026 00:41:01 +0000 (0:00:00.351) 0:00:15.765 ********** 2026-03-08 00:41:05.853388 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2) 2026-03-08 00:41:05.853401 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2) 2026-03-08 00:41:05.853414 | orchestrator | 2026-03-08 00:41:05.853426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853439 | orchestrator | Sunday 08 March 2026 00:41:01 +0000 (0:00:00.366) 0:00:16.131 ********** 2026-03-08 00:41:05.853453 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c) 2026-03-08 00:41:05.853466 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c) 2026-03-08 00:41:05.853476 | orchestrator | 2026-03-08 00:41:05.853487 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853516 | orchestrator | Sunday 08 March 2026 00:41:02 +0000 (0:00:00.343) 0:00:16.475 ********** 2026-03-08 00:41:05.853527 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8) 2026-03-08 00:41:05.853538 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8) 2026-03-08 00:41:05.853549 | orchestrator | 2026-03-08 00:41:05.853560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.853570 | orchestrator | Sunday 08 March 2026 00:41:02 +0000 (0:00:00.312) 0:00:16.787 ********** 2026-03-08 00:41:05.853581 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-08 00:41:05.853592 | orchestrator | 2026-03-08 00:41:05.853603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.853613 | orchestrator | Sunday 08 March 2026 00:41:02 +0000 (0:00:00.403) 0:00:17.190 ********** 2026-03-08 00:41:05.853624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-08 00:41:05.853643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:41:05.853654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:41:05.853664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:41:05.853675 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:41:05.853686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:41:05.853696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:41:05.853707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:41:05.853718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-08 00:41:05.853728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:41:05.853739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:41:05.853749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:41:05.853760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:41:05.853770 | orchestrator | 2026-03-08 00:41:05.853781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.853792 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.316) 0:00:17.507 ********** 2026-03-08 00:41:05.853803 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853813 | orchestrator | 2026-03-08 00:41:05.853824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.853835 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.479) 0:00:17.986 ********** 2026-03-08 00:41:05.853845 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853856 | orchestrator | 2026-03-08 00:41:05.853867 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-08 00:41:05.853878 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.169) 0:00:18.156 ********** 2026-03-08 00:41:05.853888 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853899 | orchestrator | 2026-03-08 00:41:05.853910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.853920 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.163) 0:00:18.319 ********** 2026-03-08 00:41:05.853937 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853948 | orchestrator | 2026-03-08 00:41:05.853959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.853970 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.169) 0:00:18.489 ********** 2026-03-08 00:41:05.853980 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.853991 | orchestrator | 2026-03-08 00:41:05.854002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.854066 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.165) 0:00:18.654 ********** 2026-03-08 00:41:05.854080 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.854091 | orchestrator | 2026-03-08 00:41:05.854102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.854113 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.176) 0:00:18.831 ********** 2026-03-08 00:41:05.854124 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.854134 | orchestrator | 2026-03-08 00:41:05.854145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.854156 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.220) 0:00:19.051 ********** 2026-03-08 00:41:05.854166 | orchestrator | skipping: [testbed-node-4] 
2026-03-08 00:41:05.854186 | orchestrator | 2026-03-08 00:41:05.854197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.854208 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.176) 0:00:19.228 ********** 2026-03-08 00:41:05.854236 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-08 00:41:05.854249 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-08 00:41:05.854272 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-08 00:41:05.854283 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-08 00:41:05.854294 | orchestrator | 2026-03-08 00:41:05.854304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.854315 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.802) 0:00:20.030 ********** 2026-03-08 00:41:05.854326 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.432633 | orchestrator | 2026-03-08 00:41:11.432752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:11.432779 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.138) 0:00:20.169 ********** 2026-03-08 00:41:11.432800 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.432821 | orchestrator | 2026-03-08 00:41:11.432840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:11.432859 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.134) 0:00:20.303 ********** 2026-03-08 00:41:11.432874 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.432889 | orchestrator | 2026-03-08 00:41:11.432904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:11.432919 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.139) 0:00:20.443 ********** 2026-03-08 00:41:11.432933 | 
orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.432948 | orchestrator | 2026-03-08 00:41:11.432963 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-08 00:41:11.432977 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.445) 0:00:20.888 ********** 2026-03-08 00:41:11.432992 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-08 00:41:11.433007 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-08 00:41:11.433021 | orchestrator | 2026-03-08 00:41:11.433057 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-08 00:41:11.433072 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.142) 0:00:21.030 ********** 2026-03-08 00:41:11.433086 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433104 | orchestrator | 2026-03-08 00:41:11.433123 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-08 00:41:11.433143 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.125) 0:00:21.155 ********** 2026-03-08 00:41:11.433154 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433167 | orchestrator | 2026-03-08 00:41:11.433180 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-08 00:41:11.433192 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.098) 0:00:21.254 ********** 2026-03-08 00:41:11.433219 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433231 | orchestrator | 2026-03-08 00:41:11.433244 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-08 00:41:11.433256 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.098) 0:00:21.352 ********** 2026-03-08 00:41:11.433269 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:41:11.433283 | 
orchestrator | 2026-03-08 00:41:11.433294 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-08 00:41:11.433306 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.093) 0:00:21.446 ********** 2026-03-08 00:41:11.433320 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9457a91-34ca-5e42-9332-0f1ee38194fb'}}) 2026-03-08 00:41:11.433361 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ccaad6c6-3747-58dc-9b51-af637ea3a93d'}}) 2026-03-08 00:41:11.433401 | orchestrator | 2026-03-08 00:41:11.433414 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-08 00:41:11.433426 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.119) 0:00:21.566 ********** 2026-03-08 00:41:11.433441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9457a91-34ca-5e42-9332-0f1ee38194fb'}})  2026-03-08 00:41:11.433461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ccaad6c6-3747-58dc-9b51-af637ea3a93d'}})  2026-03-08 00:41:11.433480 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433498 | orchestrator | 2026-03-08 00:41:11.433516 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-08 00:41:11.433544 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.108) 0:00:21.675 ********** 2026-03-08 00:41:11.433563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9457a91-34ca-5e42-9332-0f1ee38194fb'}})  2026-03-08 00:41:11.433602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ccaad6c6-3747-58dc-9b51-af637ea3a93d'}})  2026-03-08 00:41:11.433620 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433636 | orchestrator | 2026-03-08 
00:41:11.433651 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-08 00:41:11.433670 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.127) 0:00:21.802 ********** 2026-03-08 00:41:11.433686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9457a91-34ca-5e42-9332-0f1ee38194fb'}})  2026-03-08 00:41:11.433707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ccaad6c6-3747-58dc-9b51-af637ea3a93d'}})  2026-03-08 00:41:11.433726 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433744 | orchestrator | 2026-03-08 00:41:11.433761 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-08 00:41:11.433780 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.102) 0:00:21.904 ********** 2026-03-08 00:41:11.433799 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:41:11.433817 | orchestrator | 2026-03-08 00:41:11.433833 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-08 00:41:11.433845 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.121) 0:00:22.026 ********** 2026-03-08 00:41:11.433855 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:41:11.433866 | orchestrator | 2026-03-08 00:41:11.433876 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-08 00:41:11.433887 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.141) 0:00:22.167 ********** 2026-03-08 00:41:11.433919 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433931 | orchestrator | 2026-03-08 00:41:11.433942 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-08 00:41:11.433953 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.272) 0:00:22.440 ********** 2026-03-08 
00:41:11.433963 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.433974 | orchestrator | 2026-03-08 00:41:11.433985 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-08 00:41:11.433996 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.135) 0:00:22.575 ********** 2026-03-08 00:41:11.434006 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.434079 | orchestrator | 2026-03-08 00:41:11.434094 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-08 00:41:11.434105 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.154) 0:00:22.730 ********** 2026-03-08 00:41:11.434116 | orchestrator | ok: [testbed-node-4] => { 2026-03-08 00:41:11.434127 | orchestrator |  "ceph_osd_devices": { 2026-03-08 00:41:11.434138 | orchestrator |  "sdb": { 2026-03-08 00:41:11.434150 | orchestrator |  "osd_lvm_uuid": "a9457a91-34ca-5e42-9332-0f1ee38194fb" 2026-03-08 00:41:11.434161 | orchestrator |  }, 2026-03-08 00:41:11.434185 | orchestrator |  "sdc": { 2026-03-08 00:41:11.434196 | orchestrator |  "osd_lvm_uuid": "ccaad6c6-3747-58dc-9b51-af637ea3a93d" 2026-03-08 00:41:11.434207 | orchestrator |  } 2026-03-08 00:41:11.434217 | orchestrator |  } 2026-03-08 00:41:11.434228 | orchestrator | } 2026-03-08 00:41:11.434239 | orchestrator | 2026-03-08 00:41:11.434250 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-08 00:41:11.434261 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.145) 0:00:22.875 ********** 2026-03-08 00:41:11.434272 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.434282 | orchestrator | 2026-03-08 00:41:11.434293 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-08 00:41:11.434304 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.122) 0:00:22.998 ********** 2026-03-08 
00:41:11.434366 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:41:11.434379 | orchestrator |
2026-03-08 00:41:11.434390 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-08 00:41:11.434401 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.131) 0:00:23.129 **********
2026-03-08 00:41:11.434412 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:41:11.434422 | orchestrator |
2026-03-08 00:41:11.434433 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-08 00:41:11.434443 | orchestrator | Sunday 08 March 2026 00:41:08 +0000 (0:00:00.155) 0:00:23.285 **********
2026-03-08 00:41:11.434454 | orchestrator | changed: [testbed-node-4] => {
2026-03-08 00:41:11.434465 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-08 00:41:11.434476 | orchestrator |         "ceph_osd_devices": {
2026-03-08 00:41:11.434486 | orchestrator |             "sdb": {
2026-03-08 00:41:11.434497 | orchestrator |                 "osd_lvm_uuid": "a9457a91-34ca-5e42-9332-0f1ee38194fb"
2026-03-08 00:41:11.434508 | orchestrator |             },
2026-03-08 00:41:11.434519 | orchestrator |             "sdc": {
2026-03-08 00:41:11.434529 | orchestrator |                 "osd_lvm_uuid": "ccaad6c6-3747-58dc-9b51-af637ea3a93d"
2026-03-08 00:41:11.434540 | orchestrator |             }
2026-03-08 00:41:11.434551 | orchestrator |         },
2026-03-08 00:41:11.434561 | orchestrator |         "lvm_volumes": [
2026-03-08 00:41:11.434572 | orchestrator |             {
2026-03-08 00:41:11.434583 | orchestrator |                 "data": "osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb",
2026-03-08 00:41:11.434593 | orchestrator |                 "data_vg": "ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb"
2026-03-08 00:41:11.434604 | orchestrator |             },
2026-03-08 00:41:11.434614 | orchestrator |             {
2026-03-08 00:41:11.434625 | orchestrator |                 "data": "osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d",
2026-03-08 00:41:11.434636 | orchestrator |                 "data_vg": "ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d"
2026-03-08 00:41:11.434646 | orchestrator |             }
2026-03-08 00:41:11.434657 | orchestrator |         ]
2026-03-08 00:41:11.434668 | orchestrator |     }
2026-03-08 00:41:11.434679 | orchestrator | }
2026-03-08 00:41:11.434689 | orchestrator |
2026-03-08 00:41:11.434700 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-08 00:41:11.434711 | orchestrator | Sunday 08 March 2026 00:41:09 +0000 (0:00:00.216) 0:00:23.501 **********
2026-03-08 00:41:11.434721 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-08 00:41:11.434732 | orchestrator |
2026-03-08 00:41:11.434742 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-08 00:41:11.434753 | orchestrator |
2026-03-08 00:41:11.434764 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-08 00:41:11.434774 | orchestrator | Sunday 08 March 2026 00:41:10 +0000 (0:00:00.989) 0:00:24.491 **********
2026-03-08 00:41:11.434785 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-08 00:41:11.434795 | orchestrator |
2026-03-08 00:41:11.434806 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-08 00:41:11.434825 | orchestrator | Sunday 08 March 2026 00:41:10 +0000 (0:00:00.527) 0:00:25.018 **********
2026-03-08 00:41:11.434836 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:41:11.434847 | orchestrator |
2026-03-08 00:41:11.434858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:11.434868 | orchestrator | Sunday 08 March 2026 00:41:11 +0000 (0:00:00.301) 0:00:25.320 **********
2026-03-08 00:41:11.434879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-08 00:41:11.434889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-08 00:41:11.434908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-08 00:41:11.434919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-08 00:41:11.434930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-08 00:41:11.434950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-08 00:41:18.504181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-08 00:41:18.504291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-08 00:41:18.504305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-08 00:41:18.504315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-08 00:41:18.504396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-08 00:41:18.504410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-08 00:41:18.504420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-08 00:41:18.504430 | orchestrator |
2026-03-08 00:41:18.504441 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504452 | orchestrator | Sunday 08 March 2026 00:41:11 +0000 (0:00:00.415) 0:00:25.736 **********
2026-03-08 00:41:18.504461 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504472 | orchestrator |
2026-03-08 00:41:18.504481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504491 | orchestrator | Sunday 08 March 2026 00:41:11 +0000 (0:00:00.211) 0:00:25.947 **********
2026-03-08 00:41:18.504500 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504510 | orchestrator |
2026-03-08 00:41:18.504519 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504528 | orchestrator | Sunday 08 March 2026 00:41:11 +0000 (0:00:00.205) 0:00:26.152 **********
2026-03-08 00:41:18.504538 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504547 | orchestrator |
2026-03-08 00:41:18.504557 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504566 | orchestrator | Sunday 08 March 2026 00:41:12 +0000 (0:00:00.171) 0:00:26.324 **********
2026-03-08 00:41:18.504575 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504585 | orchestrator |
2026-03-08 00:41:18.504594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504603 | orchestrator | Sunday 08 March 2026 00:41:12 +0000 (0:00:00.165) 0:00:26.490 **********
2026-03-08 00:41:18.504613 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504622 | orchestrator |
2026-03-08 00:41:18.504634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504645 | orchestrator | Sunday 08 March 2026 00:41:12 +0000 (0:00:00.174) 0:00:26.664 **********
2026-03-08 00:41:18.504656 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504667 | orchestrator |
2026-03-08 00:41:18.504679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504696 | orchestrator | Sunday 08 March 2026 00:41:12 +0000 (0:00:00.162) 0:00:26.827 **********
2026-03-08 00:41:18.504743 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504760 | orchestrator |
2026-03-08 00:41:18.504776 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504792 | orchestrator | Sunday 08 March 2026 00:41:12 +0000 (0:00:00.190) 0:00:27.017 **********
2026-03-08 00:41:18.504809 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.504826 | orchestrator |
2026-03-08 00:41:18.504845 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504863 | orchestrator | Sunday 08 March 2026 00:41:12 +0000 (0:00:00.167) 0:00:27.184 **********
2026-03-08 00:41:18.504880 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7)
2026-03-08 00:41:18.504898 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7)
2026-03-08 00:41:18.504914 | orchestrator |
2026-03-08 00:41:18.504932 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.504950 | orchestrator | Sunday 08 March 2026 00:41:13 +0000 (0:00:00.655) 0:00:27.840 **********
2026-03-08 00:41:18.504968 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698)
2026-03-08 00:41:18.504984 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698)
2026-03-08 00:41:18.504997 | orchestrator |
2026-03-08 00:41:18.505009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.505020 | orchestrator | Sunday 08 March 2026 00:41:13 +0000 (0:00:00.371) 0:00:28.211 **********
2026-03-08 00:41:18.505030 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751)
2026-03-08 00:41:18.505040 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751)
2026-03-08 00:41:18.505049 | orchestrator |
2026-03-08 00:41:18.505059 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.505068 | orchestrator | Sunday 08 March 2026 00:41:14 +0000 (0:00:00.383) 0:00:28.595 **********
2026-03-08 00:41:18.505078 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087)
2026-03-08 00:41:18.505087 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087)
2026-03-08 00:41:18.505097 | orchestrator |
2026-03-08 00:41:18.505153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:41:18.505169 | orchestrator | Sunday 08 March 2026 00:41:14 +0000 (0:00:00.388) 0:00:28.984 **********
2026-03-08 00:41:18.505187 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-08 00:41:18.505204 | orchestrator |
2026-03-08 00:41:18.505218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505259 | orchestrator | Sunday 08 March 2026 00:41:14 +0000 (0:00:00.279) 0:00:29.263 **********
2026-03-08 00:41:18.505275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-08 00:41:18.505291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-08 00:41:18.505308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-08 00:41:18.505349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-08 00:41:18.505367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-08 00:41:18.505382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-08 00:41:18.505399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-08 00:41:18.505416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-08 00:41:18.505447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-08 00:41:18.505457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-08 00:41:18.505466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-08 00:41:18.505493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-08 00:41:18.505504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-08 00:41:18.505521 | orchestrator |
2026-03-08 00:41:18.505538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505553 | orchestrator | Sunday 08 March 2026 00:41:15 +0000 (0:00:00.326) 0:00:29.589 **********
2026-03-08 00:41:18.505569 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505585 | orchestrator |
2026-03-08 00:41:18.505600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505616 | orchestrator | Sunday 08 March 2026 00:41:15 +0000 (0:00:00.166) 0:00:29.756 **********
2026-03-08 00:41:18.505633 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505649 | orchestrator |
2026-03-08 00:41:18.505663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505678 | orchestrator | Sunday 08 March 2026 00:41:15 +0000 (0:00:00.164) 0:00:29.920 **********
2026-03-08 00:41:18.505700 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505717 | orchestrator |
2026-03-08 00:41:18.505734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505750 | orchestrator | Sunday 08 March 2026 00:41:15 +0000 (0:00:00.161) 0:00:30.082 **********
2026-03-08 00:41:18.505767 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505785 | orchestrator |
2026-03-08 00:41:18.505802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505816 | orchestrator | Sunday 08 March 2026 00:41:15 +0000 (0:00:00.161) 0:00:30.244 **********
2026-03-08 00:41:18.505831 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505845 | orchestrator |
2026-03-08 00:41:18.505861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505876 | orchestrator | Sunday 08 March 2026 00:41:16 +0000 (0:00:00.157) 0:00:30.401 **********
2026-03-08 00:41:18.505893 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505909 | orchestrator |
2026-03-08 00:41:18.505924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505939 | orchestrator | Sunday 08 March 2026 00:41:16 +0000 (0:00:00.461) 0:00:30.862 **********
2026-03-08 00:41:18.505954 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.505968 | orchestrator |
2026-03-08 00:41:18.505983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.505998 | orchestrator | Sunday 08 March 2026 00:41:16 +0000 (0:00:00.187) 0:00:31.049 **********
2026-03-08 00:41:18.506012 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.506117 | orchestrator |
2026-03-08 00:41:18.506135 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.506150 | orchestrator | Sunday 08 March 2026 00:41:16 +0000 (0:00:00.176) 0:00:31.226 **********
2026-03-08 00:41:18.506165 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-08 00:41:18.506181 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-08 00:41:18.506197 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-08 00:41:18.506211 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-08 00:41:18.506226 | orchestrator |
2026-03-08 00:41:18.506242 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.506258 | orchestrator | Sunday 08 March 2026 00:41:17 +0000 (0:00:00.614) 0:00:31.840 **********
2026-03-08 00:41:18.506275 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.506290 | orchestrator |
2026-03-08 00:41:18.506352 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.506370 | orchestrator | Sunday 08 March 2026 00:41:17 +0000 (0:00:00.230) 0:00:32.070 **********
2026-03-08 00:41:18.506386 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.506402 | orchestrator |
2026-03-08 00:41:18.506419 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.506435 | orchestrator | Sunday 08 March 2026 00:41:18 +0000 (0:00:00.248) 0:00:32.319 **********
2026-03-08 00:41:18.506451 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.506469 | orchestrator |
2026-03-08 00:41:18.506485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:41:18.506502 | orchestrator | Sunday 08 March 2026 00:41:18 +0000 (0:00:00.267) 0:00:32.586 **********
2026-03-08 00:41:18.506519 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:18.506534 | orchestrator |
2026-03-08 00:41:18.506572 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-08 00:41:22.212455 | orchestrator | Sunday 08 March 2026 00:41:18 +0000 (0:00:00.232) 0:00:32.819 **********
2026-03-08 00:41:22.212556 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-08 00:41:22.212571 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-08 00:41:22.212583 | orchestrator |
2026-03-08 00:41:22.212596 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-08 00:41:22.212607 | orchestrator | Sunday 08 March 2026 00:41:18 +0000 (0:00:00.183) 0:00:33.002 **********
2026-03-08 00:41:22.212618 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.212629 | orchestrator |
2026-03-08 00:41:22.212640 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-08 00:41:22.212651 | orchestrator | Sunday 08 March 2026 00:41:18 +0000 (0:00:00.120) 0:00:33.123 **********
2026-03-08 00:41:22.212662 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.212673 | orchestrator |
2026-03-08 00:41:22.212683 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-08 00:41:22.212694 | orchestrator | Sunday 08 March 2026 00:41:18 +0000 (0:00:00.116) 0:00:33.239 **********
2026-03-08 00:41:22.212705 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.212715 | orchestrator |
2026-03-08 00:41:22.212726 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-08 00:41:22.212737 | orchestrator | Sunday 08 March 2026 00:41:19 +0000 (0:00:00.273) 0:00:33.513 **********
2026-03-08 00:41:22.212748 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:41:22.212760 | orchestrator |
2026-03-08 00:41:22.212771 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-08 00:41:22.212782 | orchestrator | Sunday 08 March 2026 00:41:19 +0000 (0:00:00.112) 0:00:33.625 **********
2026-03-08 00:41:22.212793 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9742d483-d5c0-528b-aa0f-657894200b45'}})
2026-03-08 00:41:22.212805 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5322502-cf2a-5eb6-8fcb-1a734f718f57'}})
2026-03-08 00:41:22.212816 | orchestrator |
2026-03-08 00:41:22.212827 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-08 00:41:22.212838 | orchestrator | Sunday 08 March 2026 00:41:19 +0000 (0:00:00.160) 0:00:33.785 **********
2026-03-08 00:41:22.212849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9742d483-d5c0-528b-aa0f-657894200b45'}})
2026-03-08 00:41:22.212862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5322502-cf2a-5eb6-8fcb-1a734f718f57'}})
2026-03-08 00:41:22.212872 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.212883 | orchestrator |
2026-03-08 00:41:22.212894 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-08 00:41:22.212905 | orchestrator | Sunday 08 March 2026 00:41:19 +0000 (0:00:00.146) 0:00:33.932 **********
2026-03-08 00:41:22.212916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9742d483-d5c0-528b-aa0f-657894200b45'}})
2026-03-08 00:41:22.212953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5322502-cf2a-5eb6-8fcb-1a734f718f57'}})
2026-03-08 00:41:22.212966 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.212979 | orchestrator |
2026-03-08 00:41:22.212992 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-08 00:41:22.213004 | orchestrator | Sunday 08 March 2026 00:41:19 +0000 (0:00:00.148) 0:00:34.081 **********
2026-03-08 00:41:22.213017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9742d483-d5c0-528b-aa0f-657894200b45'}})
2026-03-08 00:41:22.213030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5322502-cf2a-5eb6-8fcb-1a734f718f57'}})
2026-03-08 00:41:22.213042 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213055 | orchestrator |
2026-03-08 00:41:22.213067 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-08 00:41:22.213080 | orchestrator | Sunday 08 March 2026 00:41:19 +0000 (0:00:00.140) 0:00:34.222 **********
2026-03-08 00:41:22.213092 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:41:22.213105 | orchestrator |
2026-03-08 00:41:22.213118 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-08 00:41:22.213130 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.131) 0:00:34.354 **********
2026-03-08 00:41:22.213142 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:41:22.213155 | orchestrator |
2026-03-08 00:41:22.213185 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-08 00:41:22.213198 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.141) 0:00:34.495 **********
2026-03-08 00:41:22.213211 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213224 | orchestrator |
2026-03-08 00:41:22.213236 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-08 00:41:22.213249 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.128) 0:00:34.624 **********
2026-03-08 00:41:22.213262 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213275 | orchestrator |
2026-03-08 00:41:22.213287 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-08 00:41:22.213300 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.119) 0:00:34.743 **********
2026-03-08 00:41:22.213312 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213343 | orchestrator |
2026-03-08 00:41:22.213355 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-08 00:41:22.213365 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.137) 0:00:34.881 **********
2026-03-08 00:41:22.213376 | orchestrator | ok: [testbed-node-5] => {
2026-03-08 00:41:22.213387 | orchestrator |     "ceph_osd_devices": {
2026-03-08 00:41:22.213399 | orchestrator |         "sdb": {
2026-03-08 00:41:22.213425 | orchestrator |             "osd_lvm_uuid": "9742d483-d5c0-528b-aa0f-657894200b45"
2026-03-08 00:41:22.213437 | orchestrator |         },
2026-03-08 00:41:22.213449 | orchestrator |         "sdc": {
2026-03-08 00:41:22.213460 | orchestrator |             "osd_lvm_uuid": "e5322502-cf2a-5eb6-8fcb-1a734f718f57"
2026-03-08 00:41:22.213471 | orchestrator |         }
2026-03-08 00:41:22.213482 | orchestrator |     }
2026-03-08 00:41:22.213494 | orchestrator | }
2026-03-08 00:41:22.213505 | orchestrator |
2026-03-08 00:41:22.213516 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-08 00:41:22.213527 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.127) 0:00:35.009 **********
2026-03-08 00:41:22.213537 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213548 | orchestrator |
2026-03-08 00:41:22.213559 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-08 00:41:22.213570 | orchestrator | Sunday 08 March 2026 00:41:20 +0000 (0:00:00.268) 0:00:35.277 **********
2026-03-08 00:41:22.213580 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213601 | orchestrator |
2026-03-08 00:41:22.213612 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-08 00:41:22.213623 | orchestrator | Sunday 08 March 2026 00:41:21 +0000 (0:00:00.117) 0:00:35.395 **********
2026-03-08 00:41:22.213633 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:41:22.213644 | orchestrator |
2026-03-08 00:41:22.213655 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-08 00:41:22.213666 | orchestrator | Sunday 08 March 2026 00:41:21 +0000 (0:00:00.123) 0:00:35.518 **********
2026-03-08 00:41:22.213677 | orchestrator | changed: [testbed-node-5] => {
2026-03-08 00:41:22.213688 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-08 00:41:22.213699 | orchestrator |         "ceph_osd_devices": {
2026-03-08 00:41:22.213709 | orchestrator |             "sdb": {
2026-03-08 00:41:22.213720 | orchestrator |                 "osd_lvm_uuid": "9742d483-d5c0-528b-aa0f-657894200b45"
2026-03-08 00:41:22.213731 | orchestrator |             },
2026-03-08 00:41:22.213742 | orchestrator |             "sdc": {
2026-03-08 00:41:22.213753 | orchestrator |                 "osd_lvm_uuid": "e5322502-cf2a-5eb6-8fcb-1a734f718f57"
2026-03-08 00:41:22.213764 | orchestrator |             }
2026-03-08 00:41:22.213775 | orchestrator |         },
2026-03-08 00:41:22.213786 | orchestrator |         "lvm_volumes": [
2026-03-08 00:41:22.213796 | orchestrator |             {
2026-03-08 00:41:22.213807 | orchestrator |                 "data": "osd-block-9742d483-d5c0-528b-aa0f-657894200b45",
2026-03-08 00:41:22.213818 | orchestrator |                 "data_vg": "ceph-9742d483-d5c0-528b-aa0f-657894200b45"
2026-03-08 00:41:22.213829 | orchestrator |             },
2026-03-08 00:41:22.213840 | orchestrator |             {
2026-03-08 00:41:22.213851 | orchestrator |                 "data": "osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57",
2026-03-08 00:41:22.213867 | orchestrator |                 "data_vg": "ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57"
2026-03-08 00:41:22.213879 | orchestrator |             }
2026-03-08 00:41:22.213890 | orchestrator |         ]
2026-03-08 00:41:22.213905 | orchestrator |     }
2026-03-08 00:41:22.213917 | orchestrator | }
2026-03-08 00:41:22.213927 | orchestrator |
2026-03-08 00:41:22.213938 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-08 00:41:22.213949 | orchestrator | Sunday 08 March 2026 00:41:21 +0000 (0:00:00.188) 0:00:35.707 **********
2026-03-08 00:41:22.213960 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-08 00:41:22.213971 | orchestrator |
2026-03-08 00:41:22.213982 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:41:22.213993 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-08 00:41:22.214005 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-08 00:41:22.214080 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-08 00:41:22.214096 | orchestrator |
2026-03-08 00:41:22.214107 | orchestrator |
2026-03-08 00:41:22.214118 | orchestrator |
2026-03-08 00:41:22.214129 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:41:22.214140 | orchestrator | Sunday 08 March 2026 00:41:22 +0000 (0:00:00.803) 0:00:36.511 **********
2026-03-08 00:41:22.214151 | orchestrator | ===============================================================================
2026-03-08 00:41:22.214162 | orchestrator | Write configuration file ------------------------------------------------ 3.42s
2026-03-08 00:41:22.214173 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s
2026-03-08 00:41:22.214184 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.02s
2026-03-08 00:41:22.214194 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2026-03-08 00:41:22.214224 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2026-03-08 00:41:22.214235 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-03-08 00:41:22.214246 | orchestrator | Print configuration data ------------------------------------------------ 0.73s
2026-03-08 00:41:22.214257 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2026-03-08 00:41:22.214268 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-03-08 00:41:22.214279 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-03-08 00:41:22.214289 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2026-03-08 00:41:22.214300 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s
2026-03-08 00:41:22.214311 | orchestrator | Set DB devices config data ---------------------------------------------- 0.53s
2026-03-08 00:41:22.214362 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2026-03-08 00:41:22.466880 | orchestrator | Print WAL devices ------------------------------------------------------- 0.52s
2026-03-08 00:41:22.466966 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-03-08 00:41:22.466976 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.49s
2026-03-08 00:41:22.466983 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s
2026-03-08 00:41:22.466991 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.47s
2026-03-08 00:41:22.466998 | orchestrator | Add known partitions to the list of available block devices ------------- 0.46s
2026-03-08 00:41:44.920812 | orchestrator | 2026-03-08 00:41:44 | INFO  | Task 2fd6ef40-7cb9-47d5-ab54-b871176998ac (sync inventory) is running in background. Output coming soon.
2026-03-08 00:42:10.274464 | orchestrator | 2026-03-08 00:41:46 | INFO  | Starting group_vars file reorganization
2026-03-08 00:42:10.274559 | orchestrator | 2026-03-08 00:41:46 | INFO  | Moved 0 file(s) to their respective directories
2026-03-08 00:42:10.274570 | orchestrator | 2026-03-08 00:41:46 | INFO  | Group_vars file reorganization completed
2026-03-08 00:42:10.274578 | orchestrator | 2026-03-08 00:41:49 | INFO  | Starting variable preparation from inventory
2026-03-08 00:42:10.274585 | orchestrator | 2026-03-08 00:41:51 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-08 00:42:10.274592 | orchestrator | 2026-03-08 00:41:51 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-08 00:42:10.274599 | orchestrator | 2026-03-08 00:41:51 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-08 00:42:10.274606 | orchestrator | 2026-03-08 00:41:51 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-08 00:42:10.274613 | orchestrator | 2026-03-08 00:41:51 | INFO  | Variable preparation completed
2026-03-08 00:42:10.274620 | orchestrator | 2026-03-08 00:41:53 | INFO  | Starting inventory overwrite handling
2026-03-08 00:42:10.274626 | orchestrator | 2026-03-08 00:41:53 | INFO  | Handling group overwrites in 99-overwrite
2026-03-08 00:42:10.274633 | orchestrator | 2026-03-08 00:41:53 | INFO  | Removing group frr:children from 60-generic
2026-03-08 00:42:10.274640 | orchestrator | 2026-03-08 00:41:53 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-08 00:42:10.274667 | orchestrator | 2026-03-08 00:41:53 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-08 00:42:10.274675 | orchestrator | 2026-03-08 00:41:53 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-08 00:42:10.274682 | orchestrator | 2026-03-08 00:41:53 | INFO  | Handling group overwrites in 20-roles
2026-03-08 00:42:10.274689 | orchestrator | 2026-03-08 00:41:53 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-08 00:42:10.274716 | orchestrator | 2026-03-08 00:41:53 | INFO  | Removed 5 group(s) in total
2026-03-08 00:42:10.274723 | orchestrator | 2026-03-08 00:41:53 | INFO  | Inventory overwrite handling completed
2026-03-08 00:42:10.274729 | orchestrator | 2026-03-08 00:41:54 | INFO  | Starting merge of inventory files
2026-03-08 00:42:10.274736 | orchestrator | 2026-03-08 00:41:54 | INFO  | Inventory files merged successfully
2026-03-08 00:42:10.274742 | orchestrator | 2026-03-08 00:41:58 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-08 00:42:10.274749 | orchestrator | 2026-03-08 00:42:08 | INFO  | Successfully wrote ClusterShell configuration
2026-03-08 00:42:10.274756 | orchestrator | [master 625dee0] 2026-03-08-00-42
2026-03-08 00:42:10.274763 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-08 00:42:11.991701 | orchestrator | 2026-03-08 00:42:11 | INFO  | Task 053902a7-2224-48ab-a21c-96e5c86dc73a (ceph-create-lvm-devices) was prepared for execution.
2026-03-08 00:42:11.991780 | orchestrator | 2026-03-08 00:42:11 | INFO  | It takes a moment until task 053902a7-2224-48ab-a21c-96e5c86dc73a (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-08 00:42:22.891596 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 00:42:22.891670 | orchestrator | 2.16.14
2026-03-08 00:42:22.891682 | orchestrator |
2026-03-08 00:42:22.891692 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-08 00:42:22.891700 | orchestrator |
2026-03-08 00:42:22.891708 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-08 00:42:22.891716 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.270) 0:00:00.270 **********
2026-03-08 00:42:22.891724 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-08 00:42:22.891732 | orchestrator |
2026-03-08 00:42:22.891740 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-08 00:42:22.891748 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.233) 0:00:00.503 **********
2026-03-08 00:42:22.891756 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:22.891763 | orchestrator |
2026-03-08 00:42:22.891771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:22.891780 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.200) 0:00:00.704 **********
2026-03-08 00:42:22.891788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-08 00:42:22.891795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-08 00:42:22.891803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-08 00:42:22.891811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-08 00:42:22.891819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-08
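The "Print configuration data" tasks above show how each OSD device's `osd_lvm_uuid` expands into a `lvm_volumes` entry: an `osd-block-<uuid>` LV inside a `ceph-<uuid>` VG. A minimal sketch of that mapping (a hypothetical reimplementation in Python for illustration; function name and shape are assumptions, not the playbook's actual Jinja2):

```python
def build_lvm_volumes(ceph_osd_devices):
    # Derive the LV ("data") and VG ("data_vg") names from each device's
    # osd_lvm_uuid, matching the "osd-block-<uuid>" / "ceph-<uuid>"
    # pattern visible in the task output above.
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# UUIDs taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "9742d483-d5c0-528b-aa0f-657894200b45"},
    "sdc": {"osd_lvm_uuid": "e5322502-cf2a-5eb6-8fcb-1a734f718f57"},
}
for volume in build_lvm_volumes(devices):
    print(volume["data"], "->", volume["data_vg"])
```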
00:42:22.891826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-08 00:42:22.891834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-08 00:42:22.891842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-08 00:42:22.891849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-08 00:42:22.891857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-08 00:42:22.891865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-08 00:42:22.891872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-08 00:42:22.891880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-08 00:42:22.891905 | orchestrator | 2026-03-08 00:42:22.891914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.891921 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.441) 0:00:01.146 ********** 2026-03-08 00:42:22.891929 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.891937 | orchestrator | 2026-03-08 00:42:22.891945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.891952 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.198) 0:00:01.344 ********** 2026-03-08 00:42:22.891960 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.891968 | orchestrator | 2026-03-08 00:42:22.891976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.891983 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.208) 0:00:01.553 ********** 2026-03-08 
00:42:22.891991 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.891999 | orchestrator | 2026-03-08 00:42:22.892006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892014 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.169) 0:00:01.723 ********** 2026-03-08 00:42:22.892021 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892029 | orchestrator | 2026-03-08 00:42:22.892037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892045 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.188) 0:00:01.911 ********** 2026-03-08 00:42:22.892052 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892060 | orchestrator | 2026-03-08 00:42:22.892067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892075 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.181) 0:00:02.093 ********** 2026-03-08 00:42:22.892083 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892090 | orchestrator | 2026-03-08 00:42:22.892098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892106 | orchestrator | Sunday 08 March 2026 00:42:18 +0000 (0:00:00.193) 0:00:02.287 ********** 2026-03-08 00:42:22.892113 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892121 | orchestrator | 2026-03-08 00:42:22.892129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892137 | orchestrator | Sunday 08 March 2026 00:42:18 +0000 (0:00:00.209) 0:00:02.496 ********** 2026-03-08 00:42:22.892144 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892152 | orchestrator | 2026-03-08 00:42:22.892159 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-08 00:42:22.892169 | orchestrator | Sunday 08 March 2026 00:42:18 +0000 (0:00:00.218) 0:00:02.714 ********** 2026-03-08 00:42:22.892178 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6) 2026-03-08 00:42:22.892188 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6) 2026-03-08 00:42:22.892197 | orchestrator | 2026-03-08 00:42:22.892206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892227 | orchestrator | Sunday 08 March 2026 00:42:19 +0000 (0:00:00.428) 0:00:03.143 ********** 2026-03-08 00:42:22.892238 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0) 2026-03-08 00:42:22.892246 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0) 2026-03-08 00:42:22.892255 | orchestrator | 2026-03-08 00:42:22.892284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892294 | orchestrator | Sunday 08 March 2026 00:42:19 +0000 (0:00:00.531) 0:00:03.674 ********** 2026-03-08 00:42:22.892303 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf) 2026-03-08 00:42:22.892311 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf) 2026-03-08 00:42:22.892326 | orchestrator | 2026-03-08 00:42:22.892336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892344 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.512) 0:00:04.186 ********** 2026-03-08 00:42:22.892353 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe) 2026-03-08 00:42:22.892362 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe) 2026-03-08 00:42:22.892371 | orchestrator | 2026-03-08 00:42:22.892380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:22.892389 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.716) 0:00:04.904 ********** 2026-03-08 00:42:22.892398 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-08 00:42:22.892406 | orchestrator | 2026-03-08 00:42:22.892416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892426 | orchestrator | Sunday 08 March 2026 00:42:21 +0000 (0:00:00.308) 0:00:05.212 ********** 2026-03-08 00:42:22.892440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-08 00:42:22.892453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-08 00:42:22.892467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-08 00:42:22.892479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-08 00:42:22.892492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-08 00:42:22.892505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-08 00:42:22.892519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-08 00:42:22.892532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-08 00:42:22.892545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-08 00:42:22.892559 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-08 00:42:22.892573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-08 00:42:22.892599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-08 00:42:22.892607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-08 00:42:22.892615 | orchestrator | 2026-03-08 00:42:22.892623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892631 | orchestrator | Sunday 08 March 2026 00:42:21 +0000 (0:00:00.374) 0:00:05.587 ********** 2026-03-08 00:42:22.892639 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892647 | orchestrator | 2026-03-08 00:42:22.892655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892666 | orchestrator | Sunday 08 March 2026 00:42:21 +0000 (0:00:00.189) 0:00:05.776 ********** 2026-03-08 00:42:22.892679 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892692 | orchestrator | 2026-03-08 00:42:22.892704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892717 | orchestrator | Sunday 08 March 2026 00:42:21 +0000 (0:00:00.242) 0:00:06.019 ********** 2026-03-08 00:42:22.892731 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892745 | orchestrator | 2026-03-08 00:42:22.892759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892772 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.183) 0:00:06.203 ********** 2026-03-08 00:42:22.892782 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892796 | orchestrator | 2026-03-08 00:42:22.892804 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892812 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.201) 0:00:06.404 ********** 2026-03-08 00:42:22.892820 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892827 | orchestrator | 2026-03-08 00:42:22.892835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892842 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.196) 0:00:06.600 ********** 2026-03-08 00:42:22.892850 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892857 | orchestrator | 2026-03-08 00:42:22.892865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:22.892873 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.200) 0:00:06.801 ********** 2026-03-08 00:42:22.892880 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:22.892888 | orchestrator | 2026-03-08 00:42:22.892902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:30.493791 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.221) 0:00:07.022 ********** 2026-03-08 00:42:30.493889 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.493903 | orchestrator | 2026-03-08 00:42:30.493915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:30.493926 | orchestrator | Sunday 08 March 2026 00:42:23 +0000 (0:00:00.221) 0:00:07.244 ********** 2026-03-08 00:42:30.493936 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-08 00:42:30.493948 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-08 00:42:30.493959 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-08 00:42:30.493969 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-08 00:42:30.493980 | orchestrator | 2026-03-08 
00:42:30.493990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:30.494001 | orchestrator | Sunday 08 March 2026 00:42:24 +0000 (0:00:01.027) 0:00:08.272 ********** 2026-03-08 00:42:30.494011 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494082 | orchestrator | 2026-03-08 00:42:30.494094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:30.494105 | orchestrator | Sunday 08 March 2026 00:42:24 +0000 (0:00:00.195) 0:00:08.467 ********** 2026-03-08 00:42:30.494116 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494127 | orchestrator | 2026-03-08 00:42:30.494139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:30.494151 | orchestrator | Sunday 08 March 2026 00:42:24 +0000 (0:00:00.218) 0:00:08.686 ********** 2026-03-08 00:42:30.494162 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494173 | orchestrator | 2026-03-08 00:42:30.494185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:30.494196 | orchestrator | Sunday 08 March 2026 00:42:24 +0000 (0:00:00.199) 0:00:08.885 ********** 2026-03-08 00:42:30.494207 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494219 | orchestrator | 2026-03-08 00:42:30.494230 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-08 00:42:30.494242 | orchestrator | Sunday 08 March 2026 00:42:24 +0000 (0:00:00.209) 0:00:09.095 ********** 2026-03-08 00:42:30.494275 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494287 | orchestrator | 2026-03-08 00:42:30.494297 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-08 00:42:30.494308 | orchestrator | Sunday 08 March 2026 00:42:25 +0000 (0:00:00.128) 
0:00:09.224 ********** 2026-03-08 00:42:30.494318 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd02f715b-f6fc-5dd9-afa3-4d404d1973db'}}) 2026-03-08 00:42:30.494329 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '06971c7f-d1d9-5519-989d-752a08544c4e'}}) 2026-03-08 00:42:30.494341 | orchestrator | 2026-03-08 00:42:30.494351 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-08 00:42:30.494386 | orchestrator | Sunday 08 March 2026 00:42:25 +0000 (0:00:00.210) 0:00:09.435 ********** 2026-03-08 00:42:30.494395 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'}) 2026-03-08 00:42:30.494404 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'}) 2026-03-08 00:42:30.494411 | orchestrator | 2026-03-08 00:42:30.494419 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-08 00:42:30.494437 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:01.937) 0:00:11.373 ********** 2026-03-08 00:42:30.494445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494465 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494480 | orchestrator | 2026-03-08 00:42:30.494493 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-08 00:42:30.494503 | orchestrator | Sunday 08 March 2026 
00:42:27 +0000 (0:00:00.156) 0:00:11.529 ********** 2026-03-08 00:42:30.494513 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'}) 2026-03-08 00:42:30.494524 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'}) 2026-03-08 00:42:30.494534 | orchestrator | 2026-03-08 00:42:30.494543 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-08 00:42:30.494553 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:01.380) 0:00:12.910 ********** 2026-03-08 00:42:30.494564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494585 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494595 | orchestrator | 2026-03-08 00:42:30.494606 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-08 00:42:30.494615 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.130) 0:00:13.040 ********** 2026-03-08 00:42:30.494643 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494653 | orchestrator | 2026-03-08 00:42:30.494664 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-08 00:42:30.494674 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.110) 0:00:13.151 ********** 2026-03-08 00:42:30.494682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 
'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494702 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494712 | orchestrator | 2026-03-08 00:42:30.494723 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-08 00:42:30.494733 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.274) 0:00:13.425 ********** 2026-03-08 00:42:30.494742 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494751 | orchestrator | 2026-03-08 00:42:30.494760 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-08 00:42:30.494770 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.137) 0:00:13.563 ********** 2026-03-08 00:42:30.494784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494790 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494795 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494800 | orchestrator | 2026-03-08 00:42:30.494806 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-08 00:42:30.494811 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.140) 0:00:13.704 ********** 2026-03-08 00:42:30.494816 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494822 | orchestrator | 2026-03-08 00:42:30.494827 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-08 00:42:30.494832 | orchestrator | Sunday 
08 March 2026 00:42:29 +0000 (0:00:00.129) 0:00:13.833 ********** 2026-03-08 00:42:30.494838 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494849 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494854 | orchestrator | 2026-03-08 00:42:30.494859 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-08 00:42:30.494865 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.131) 0:00:13.964 ********** 2026-03-08 00:42:30.494870 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:30.494876 | orchestrator | 2026-03-08 00:42:30.494881 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-08 00:42:30.494887 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.120) 0:00:14.085 ********** 2026-03-08 00:42:30.494892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494904 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494909 | orchestrator | 2026-03-08 00:42:30.494915 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-08 00:42:30.494920 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.137) 0:00:14.222 ********** 2026-03-08 00:42:30.494925 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494944 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494950 | orchestrator | 2026-03-08 00:42:30.494955 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-08 00:42:30.494961 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.144) 0:00:14.367 ********** 2026-03-08 00:42:30.494966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:30.494971 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:30.494977 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.494982 | orchestrator | 2026-03-08 00:42:30.494987 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-08 00:42:30.494993 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.134) 0:00:14.501 ********** 2026-03-08 00:42:30.495003 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:30.495008 | orchestrator | 2026-03-08 00:42:30.495014 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-08 00:42:30.495024 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.121) 0:00:14.622 ********** 2026-03-08 00:42:36.301236 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301337 | orchestrator | 2026-03-08 00:42:36.301347 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-08 00:42:36.301355 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.117) 0:00:14.740 ********** 2026-03-08 00:42:36.301362 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301368 | orchestrator | 2026-03-08 00:42:36.301374 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-08 00:42:36.301381 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.129) 0:00:14.869 ********** 2026-03-08 00:42:36.301391 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:42:36.301402 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-08 00:42:36.301414 | orchestrator | } 2026-03-08 00:42:36.301431 | orchestrator | 2026-03-08 00:42:36.301443 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-08 00:42:36.301454 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.235) 0:00:15.104 ********** 2026-03-08 00:42:36.301465 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:42:36.301477 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-08 00:42:36.301487 | orchestrator | } 2026-03-08 00:42:36.301497 | orchestrator | 2026-03-08 00:42:36.301509 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-08 00:42:36.301521 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.139) 0:00:15.244 ********** 2026-03-08 00:42:36.301531 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:42:36.301543 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-08 00:42:36.301555 | orchestrator | } 2026-03-08 00:42:36.301566 | orchestrator | 2026-03-08 00:42:36.301578 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-08 00:42:36.301588 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.122) 0:00:15.367 ********** 2026-03-08 00:42:36.301594 | orchestrator | ok: 
[testbed-node-3] 2026-03-08 00:42:36.301601 | orchestrator | 2026-03-08 00:42:36.301607 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-08 00:42:36.301613 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.633) 0:00:16.001 ********** 2026-03-08 00:42:36.301620 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:36.301626 | orchestrator | 2026-03-08 00:42:36.301632 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-08 00:42:36.301638 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.485) 0:00:16.486 ********** 2026-03-08 00:42:36.301644 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:36.301650 | orchestrator | 2026-03-08 00:42:36.301657 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-08 00:42:36.301663 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.458) 0:00:16.945 ********** 2026-03-08 00:42:36.301669 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:36.301675 | orchestrator | 2026-03-08 00:42:36.301681 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-08 00:42:36.301688 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.125) 0:00:17.071 ********** 2026-03-08 00:42:36.301694 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301700 | orchestrator | 2026-03-08 00:42:36.301706 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-08 00:42:36.301712 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.117) 0:00:17.188 ********** 2026-03-08 00:42:36.301718 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301724 | orchestrator | 2026-03-08 00:42:36.301730 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-08 00:42:36.301754 | orchestrator | 
Sunday 08 March 2026 00:42:33 +0000 (0:00:00.101) 0:00:17.290 ********** 2026-03-08 00:42:36.301772 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:42:36.301779 | orchestrator |  "vgs_report": { 2026-03-08 00:42:36.301785 | orchestrator |  "vg": [] 2026-03-08 00:42:36.301792 | orchestrator |  } 2026-03-08 00:42:36.301799 | orchestrator | } 2026-03-08 00:42:36.301806 | orchestrator | 2026-03-08 00:42:36.301814 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-08 00:42:36.301821 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.140) 0:00:17.431 ********** 2026-03-08 00:42:36.301828 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301834 | orchestrator | 2026-03-08 00:42:36.301841 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-08 00:42:36.301848 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.130) 0:00:17.562 ********** 2026-03-08 00:42:36.301855 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301862 | orchestrator | 2026-03-08 00:42:36.301870 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-08 00:42:36.301877 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.139) 0:00:17.701 ********** 2026-03-08 00:42:36.301884 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301891 | orchestrator | 2026-03-08 00:42:36.301897 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-08 00:42:36.301903 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.260) 0:00:17.962 ********** 2026-03-08 00:42:36.301909 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301915 | orchestrator | 2026-03-08 00:42:36.301922 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-08 00:42:36.301928 | orchestrator | Sunday 
08 March 2026 00:42:33 +0000 (0:00:00.124) 0:00:18.087 ********** 2026-03-08 00:42:36.301934 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301940 | orchestrator | 2026-03-08 00:42:36.301946 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-08 00:42:36.301952 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.132) 0:00:18.219 ********** 2026-03-08 00:42:36.301958 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301964 | orchestrator | 2026-03-08 00:42:36.301970 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-08 00:42:36.301976 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.119) 0:00:18.338 ********** 2026-03-08 00:42:36.301982 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.301988 | orchestrator | 2026-03-08 00:42:36.301995 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-08 00:42:36.302001 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.134) 0:00:18.473 ********** 2026-03-08 00:42:36.302060 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302069 | orchestrator | 2026-03-08 00:42:36.302075 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-08 00:42:36.302081 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.117) 0:00:18.591 ********** 2026-03-08 00:42:36.302087 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302093 | orchestrator | 2026-03-08 00:42:36.302099 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-08 00:42:36.302105 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.118) 0:00:18.709 ********** 2026-03-08 00:42:36.302112 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302118 | orchestrator | 2026-03-08 00:42:36.302124 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-08 00:42:36.302130 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.134) 0:00:18.844 ********** 2026-03-08 00:42:36.302136 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302142 | orchestrator | 2026-03-08 00:42:36.302148 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-08 00:42:36.302154 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.115) 0:00:18.960 ********** 2026-03-08 00:42:36.302167 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302177 | orchestrator | 2026-03-08 00:42:36.302187 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-08 00:42:36.302198 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.122) 0:00:19.082 ********** 2026-03-08 00:42:36.302207 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302217 | orchestrator | 2026-03-08 00:42:36.302226 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-08 00:42:36.302234 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.131) 0:00:19.214 ********** 2026-03-08 00:42:36.302244 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302290 | orchestrator | 2026-03-08 00:42:36.302300 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-08 00:42:36.302310 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.109) 0:00:19.323 ********** 2026-03-08 00:42:36.302320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:36.302332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 
'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:36.302343 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302352 | orchestrator | 2026-03-08 00:42:36.302362 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-08 00:42:36.302374 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.299) 0:00:19.623 ********** 2026-03-08 00:42:36.302384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:36.302395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:36.302406 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302416 | orchestrator | 2026-03-08 00:42:36.302428 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-08 00:42:36.302439 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.166) 0:00:19.790 ********** 2026-03-08 00:42:36.302449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:36.302460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:36.302470 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302482 | orchestrator | 2026-03-08 00:42:36.302493 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-08 00:42:36.302503 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.133) 0:00:19.923 ********** 2026-03-08 00:42:36.302514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:36.302524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:36.302535 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302546 | orchestrator | 2026-03-08 00:42:36.302557 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-08 00:42:36.302567 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.154) 0:00:20.077 ********** 2026-03-08 00:42:36.302578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:36.302588 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:36.302607 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:36.302617 | orchestrator | 2026-03-08 00:42:36.302626 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-08 00:42:36.302636 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.159) 0:00:20.237 ********** 2026-03-08 00:42:36.302656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:41.359753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:41.359859 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:41.359875 | orchestrator | 2026-03-08 00:42:41.359889 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-08 00:42:41.359901 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.197) 0:00:20.434 ********** 2026-03-08 00:42:41.359913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:41.359924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:41.359942 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:41.359962 | orchestrator | 2026-03-08 00:42:41.360016 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-08 00:42:41.360039 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.147) 0:00:20.582 ********** 2026-03-08 00:42:41.360058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:41.360078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:41.360099 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:41.360118 | orchestrator | 2026-03-08 00:42:41.360139 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-08 00:42:41.360160 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.177) 0:00:20.760 ********** 2026-03-08 00:42:41.360180 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:41.360202 | orchestrator | 2026-03-08 00:42:41.360223 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-08 00:42:41.360243 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 
(0:00:00.462) 0:00:21.223 ********** 2026-03-08 00:42:41.360298 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:41.360317 | orchestrator | 2026-03-08 00:42:41.360336 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-08 00:42:41.360356 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.466) 0:00:21.690 ********** 2026-03-08 00:42:41.360374 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:41.360393 | orchestrator | 2026-03-08 00:42:41.360411 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-08 00:42:41.360430 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.151) 0:00:21.841 ********** 2026-03-08 00:42:41.360449 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'vg_name': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'}) 2026-03-08 00:42:41.360478 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'vg_name': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'}) 2026-03-08 00:42:41.360496 | orchestrator | 2026-03-08 00:42:41.360514 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-08 00:42:41.360532 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.142) 0:00:21.983 ********** 2026-03-08 00:42:41.360550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:41.360597 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:41.360617 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:41.360637 | orchestrator | 2026-03-08 00:42:41.360656 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-08 00:42:41.360673 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.292) 0:00:22.275 ********** 2026-03-08 00:42:41.360691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:41.360710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:41.360730 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:41.360749 | orchestrator | 2026-03-08 00:42:41.360770 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-08 00:42:41.360790 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.136) 0:00:22.412 ********** 2026-03-08 00:42:41.360807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'})  2026-03-08 00:42:41.360827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'})  2026-03-08 00:42:41.360847 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:41.360866 | orchestrator | 2026-03-08 00:42:41.360883 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-08 00:42:41.360901 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.168) 0:00:22.580 ********** 2026-03-08 00:42:41.360939 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:42:41.360952 | orchestrator |  "lvm_report": { 2026-03-08 00:42:41.360963 | orchestrator |  "lv": [ 2026-03-08 00:42:41.360975 | orchestrator |  { 2026-03-08 00:42:41.360986 | orchestrator |  "lv_name": 
"osd-block-06971c7f-d1d9-5519-989d-752a08544c4e", 2026-03-08 00:42:41.360998 | orchestrator |  "vg_name": "ceph-06971c7f-d1d9-5519-989d-752a08544c4e" 2026-03-08 00:42:41.361008 | orchestrator |  }, 2026-03-08 00:42:41.361019 | orchestrator |  { 2026-03-08 00:42:41.361030 | orchestrator |  "lv_name": "osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db", 2026-03-08 00:42:41.361041 | orchestrator |  "vg_name": "ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db" 2026-03-08 00:42:41.361052 | orchestrator |  } 2026-03-08 00:42:41.361062 | orchestrator |  ], 2026-03-08 00:42:41.361073 | orchestrator |  "pv": [ 2026-03-08 00:42:41.361084 | orchestrator |  { 2026-03-08 00:42:41.361103 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-08 00:42:41.361120 | orchestrator |  "vg_name": "ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db" 2026-03-08 00:42:41.361138 | orchestrator |  }, 2026-03-08 00:42:41.361157 | orchestrator |  { 2026-03-08 00:42:41.361174 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-08 00:42:41.361192 | orchestrator |  "vg_name": "ceph-06971c7f-d1d9-5519-989d-752a08544c4e" 2026-03-08 00:42:41.361211 | orchestrator |  } 2026-03-08 00:42:41.361228 | orchestrator |  ] 2026-03-08 00:42:41.361271 | orchestrator |  } 2026-03-08 00:42:41.361291 | orchestrator | } 2026-03-08 00:42:41.361309 | orchestrator | 2026-03-08 00:42:41.361327 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-08 00:42:41.361344 | orchestrator | 2026-03-08 00:42:41.361363 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-08 00:42:41.361381 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.288) 0:00:22.869 ********** 2026-03-08 00:42:41.361417 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-08 00:42:41.361435 | orchestrator | 2026-03-08 00:42:41.361454 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-08 
00:42:41.361468 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.225) 0:00:23.094 ********** 2026-03-08 00:42:41.361478 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:42:41.361489 | orchestrator | 2026-03-08 00:42:41.361500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:41.361510 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.222) 0:00:23.317 ********** 2026-03-08 00:42:41.361521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-08 00:42:41.361532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:42:41.361542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:42:41.361553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:42:41.361564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:42:41.361574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:42:41.361585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:42:41.361604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:42:41.361615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-08 00:42:41.361626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:42:41.361637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:42:41.361647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:42:41.361658 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:42:41.361669 | orchestrator | 2026-03-08 00:42:41.361680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:41.361690 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.491) 0:00:23.809 ********** 2026-03-08 00:42:41.361701 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:41.361711 | orchestrator | 2026-03-08 00:42:41.361722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:41.361733 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.201) 0:00:24.010 ********** 2026-03-08 00:42:41.361744 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:41.361755 | orchestrator | 2026-03-08 00:42:41.361766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:41.361777 | orchestrator | Sunday 08 March 2026 00:42:40 +0000 (0:00:00.189) 0:00:24.199 ********** 2026-03-08 00:42:41.361787 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:41.361798 | orchestrator | 2026-03-08 00:42:41.361809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:41.361819 | orchestrator | Sunday 08 March 2026 00:42:40 +0000 (0:00:00.664) 0:00:24.864 ********** 2026-03-08 00:42:41.361830 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:41.361841 | orchestrator | 2026-03-08 00:42:41.361851 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:41.361862 | orchestrator | Sunday 08 March 2026 00:42:40 +0000 (0:00:00.207) 0:00:25.071 ********** 2026-03-08 00:42:41.361873 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:41.361884 | orchestrator | 2026-03-08 00:42:41.361894 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-08 00:42:41.361905 | orchestrator | Sunday 08 March 2026 00:42:41 +0000 (0:00:00.211) 0:00:25.283 ********** 2026-03-08 00:42:41.361923 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:41.361934 | orchestrator | 2026-03-08 00:42:41.361957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022625 | orchestrator | Sunday 08 March 2026 00:42:41 +0000 (0:00:00.209) 0:00:25.492 ********** 2026-03-08 00:42:53.022728 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.022742 | orchestrator | 2026-03-08 00:42:53.022754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022764 | orchestrator | Sunday 08 March 2026 00:42:41 +0000 (0:00:00.192) 0:00:25.685 ********** 2026-03-08 00:42:53.022774 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.022784 | orchestrator | 2026-03-08 00:42:53.022794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022803 | orchestrator | Sunday 08 March 2026 00:42:41 +0000 (0:00:00.221) 0:00:25.907 ********** 2026-03-08 00:42:53.022813 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633) 2026-03-08 00:42:53.022824 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633) 2026-03-08 00:42:53.022833 | orchestrator | 2026-03-08 00:42:53.022843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022853 | orchestrator | Sunday 08 March 2026 00:42:42 +0000 (0:00:00.460) 0:00:26.368 ********** 2026-03-08 00:42:53.022862 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2) 2026-03-08 00:42:53.022872 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2) 2026-03-08 00:42:53.022882 | orchestrator | 2026-03-08 00:42:53.022891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022901 | orchestrator | Sunday 08 March 2026 00:42:42 +0000 (0:00:00.449) 0:00:26.817 ********** 2026-03-08 00:42:53.022910 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c) 2026-03-08 00:42:53.022920 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c) 2026-03-08 00:42:53.022929 | orchestrator | 2026-03-08 00:42:53.022939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022949 | orchestrator | Sunday 08 March 2026 00:42:43 +0000 (0:00:00.437) 0:00:27.255 ********** 2026-03-08 00:42:53.022958 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8) 2026-03-08 00:42:53.022968 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8) 2026-03-08 00:42:53.022978 | orchestrator | 2026-03-08 00:42:53.022987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:53.022997 | orchestrator | Sunday 08 March 2026 00:42:43 +0000 (0:00:00.654) 0:00:27.909 ********** 2026-03-08 00:42:53.023006 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-08 00:42:53.023016 | orchestrator | 2026-03-08 00:42:53.023025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023035 | orchestrator | Sunday 08 March 2026 00:42:44 +0000 (0:00:00.580) 0:00:28.489 ********** 2026-03-08 00:42:53.023044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-08 00:42:53.023055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:42:53.023064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:42:53.023074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:42:53.023084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:42:53.023093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:42:53.023125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:42:53.023137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:42:53.023148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-08 00:42:53.023159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:42:53.023170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:42:53.023181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:42:53.023192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:42:53.023202 | orchestrator | 2026-03-08 00:42:53.023214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023225 | orchestrator | Sunday 08 March 2026 00:42:45 +0000 (0:00:00.921) 0:00:29.410 ********** 2026-03-08 00:42:53.023257 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023269 | orchestrator | 2026-03-08 
00:42:53.023280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023309 | orchestrator | Sunday 08 March 2026 00:42:45 +0000 (0:00:00.213) 0:00:29.624 ********** 2026-03-08 00:42:53.023321 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023332 | orchestrator | 2026-03-08 00:42:53.023344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023356 | orchestrator | Sunday 08 March 2026 00:42:45 +0000 (0:00:00.221) 0:00:29.846 ********** 2026-03-08 00:42:53.023367 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023378 | orchestrator | 2026-03-08 00:42:53.023418 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023431 | orchestrator | Sunday 08 March 2026 00:42:45 +0000 (0:00:00.234) 0:00:30.081 ********** 2026-03-08 00:42:53.023452 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023463 | orchestrator | 2026-03-08 00:42:53.023475 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023486 | orchestrator | Sunday 08 March 2026 00:42:46 +0000 (0:00:00.213) 0:00:30.294 ********** 2026-03-08 00:42:53.023497 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023507 | orchestrator | 2026-03-08 00:42:53.023517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023526 | orchestrator | Sunday 08 March 2026 00:42:46 +0000 (0:00:00.220) 0:00:30.515 ********** 2026-03-08 00:42:53.023536 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023545 | orchestrator | 2026-03-08 00:42:53.023555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023565 | orchestrator | Sunday 08 March 2026 00:42:46 +0000 (0:00:00.232) 
0:00:30.748 ********** 2026-03-08 00:42:53.023574 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023584 | orchestrator | 2026-03-08 00:42:53.023594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023603 | orchestrator | Sunday 08 March 2026 00:42:46 +0000 (0:00:00.248) 0:00:30.997 ********** 2026-03-08 00:42:53.023613 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023623 | orchestrator | 2026-03-08 00:42:53.023632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023642 | orchestrator | Sunday 08 March 2026 00:42:47 +0000 (0:00:00.221) 0:00:31.218 ********** 2026-03-08 00:42:53.023651 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-08 00:42:53.023661 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-08 00:42:53.023671 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-08 00:42:53.023681 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-08 00:42:53.023690 | orchestrator | 2026-03-08 00:42:53.023700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023717 | orchestrator | Sunday 08 March 2026 00:42:47 +0000 (0:00:00.901) 0:00:32.120 ********** 2026-03-08 00:42:53.023727 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023737 | orchestrator | 2026-03-08 00:42:53.023746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023756 | orchestrator | Sunday 08 March 2026 00:42:48 +0000 (0:00:00.208) 0:00:32.329 ********** 2026-03-08 00:42:53.023765 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023775 | orchestrator | 2026-03-08 00:42:53.023784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023794 | orchestrator | Sunday 08 
March 2026 00:42:48 +0000 (0:00:00.714) 0:00:33.044 ********** 2026-03-08 00:42:53.023804 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023813 | orchestrator | 2026-03-08 00:42:53.023823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:53.023832 | orchestrator | Sunday 08 March 2026 00:42:49 +0000 (0:00:00.229) 0:00:33.273 ********** 2026-03-08 00:42:53.023842 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023851 | orchestrator | 2026-03-08 00:42:53.023861 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-08 00:42:53.023875 | orchestrator | Sunday 08 March 2026 00:42:49 +0000 (0:00:00.223) 0:00:33.496 ********** 2026-03-08 00:42:53.023885 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.023894 | orchestrator | 2026-03-08 00:42:53.023904 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-08 00:42:53.023913 | orchestrator | Sunday 08 March 2026 00:42:49 +0000 (0:00:00.156) 0:00:33.653 ********** 2026-03-08 00:42:53.023923 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9457a91-34ca-5e42-9332-0f1ee38194fb'}}) 2026-03-08 00:42:53.023933 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ccaad6c6-3747-58dc-9b51-af637ea3a93d'}}) 2026-03-08 00:42:53.023943 | orchestrator | 2026-03-08 00:42:53.023952 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-08 00:42:53.023984 | orchestrator | Sunday 08 March 2026 00:42:49 +0000 (0:00:00.204) 0:00:33.857 ********** 2026-03-08 00:42:53.023996 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'}) 2026-03-08 00:42:53.024007 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'}) 2026-03-08 00:42:53.024017 | orchestrator | 2026-03-08 00:42:53.024027 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-08 00:42:53.024036 | orchestrator | Sunday 08 March 2026 00:42:51 +0000 (0:00:01.800) 0:00:35.658 ********** 2026-03-08 00:42:53.024046 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})  2026-03-08 00:42:53.024057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})  2026-03-08 00:42:53.024066 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:53.024076 | orchestrator | 2026-03-08 00:42:53.024086 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-08 00:42:53.024095 | orchestrator | Sunday 08 March 2026 00:42:51 +0000 (0:00:00.176) 0:00:35.834 ********** 2026-03-08 00:42:53.024105 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'}) 2026-03-08 00:42:53.024121 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'}) 2026-03-08 00:42:58.101489 | orchestrator | 2026-03-08 00:42:58.101603 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-08 00:42:58.101645 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:01.317) 0:00:37.151 ********** 2026-03-08 00:42:58.101656 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 
'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.101666 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.101675 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101686 | orchestrator |
2026-03-08 00:42:58.101732 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-08 00:42:58.101743 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.143) 0:00:37.295 **********
2026-03-08 00:42:58.101752 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101761 | orchestrator |
2026-03-08 00:42:58.101770 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-08 00:42:58.101778 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.121) 0:00:37.416 **********
2026-03-08 00:42:58.101787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.101795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.101804 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101812 | orchestrator |
2026-03-08 00:42:58.101821 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-08 00:42:58.101829 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.140) 0:00:37.557 **********
2026-03-08 00:42:58.101838 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101847 | orchestrator |
2026-03-08 00:42:58.101855 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-08 00:42:58.101864 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.124) 0:00:37.682 **********
2026-03-08 00:42:58.101873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.101882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.101890 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101899 | orchestrator |
2026-03-08 00:42:58.101908 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-08 00:42:58.101932 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.275) 0:00:37.957 **********
2026-03-08 00:42:58.101941 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101950 | orchestrator |
2026-03-08 00:42:58.101959 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-08 00:42:58.101966 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.127) 0:00:38.085 **********
2026-03-08 00:42:58.101971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.101976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.101982 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.101987 | orchestrator |
2026-03-08 00:42:58.101992 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-08 00:42:58.101997 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.128) 0:00:38.213 **********
2026-03-08 00:42:58.102002 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:42:58.102008 | orchestrator |
2026-03-08 00:42:58.102050 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-08 00:42:58.102066 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.127) 0:00:38.341 **********
2026-03-08 00:42:58.102072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.102078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.102084 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102090 | orchestrator |
2026-03-08 00:42:58.102096 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-08 00:42:58.102102 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.134) 0:00:38.475 **********
2026-03-08 00:42:58.102108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.102133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.102139 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102145 | orchestrator |
2026-03-08 00:42:58.102151 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-08 00:42:58.102174 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.136) 0:00:38.611 **********
2026-03-08 00:42:58.102181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:42:58.102187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:42:58.102193 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102199 | orchestrator |
2026-03-08 00:42:58.102204 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-08 00:42:58.102211 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.146) 0:00:38.758 **********
2026-03-08 00:42:58.102216 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102222 | orchestrator |
2026-03-08 00:42:58.102245 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-08 00:42:58.102252 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.120) 0:00:38.879 **********
2026-03-08 00:42:58.102258 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102264 | orchestrator |
2026-03-08 00:42:58.102270 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-08 00:42:58.102276 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.140) 0:00:39.019 **********
2026-03-08 00:42:58.102282 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102291 | orchestrator |
2026-03-08 00:42:58.102300 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-08 00:42:58.102309 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.142) 0:00:39.162 **********
2026-03-08 00:42:58.102317 | orchestrator | ok: [testbed-node-4] => {
2026-03-08 00:42:58.102326 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-08 00:42:58.102335 | orchestrator | }
2026-03-08 00:42:58.102345 | orchestrator |
2026-03-08 00:42:58.102353 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-08 00:42:58.102362 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.151) 0:00:39.314 **********
2026-03-08 00:42:58.102370 | orchestrator | ok: [testbed-node-4] => {
2026-03-08 00:42:58.102380 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-08 00:42:58.102388 | orchestrator | }
2026-03-08 00:42:58.102398 | orchestrator |
2026-03-08 00:42:58.102407 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-08 00:42:58.102416 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.128) 0:00:39.442 **********
2026-03-08 00:42:58.102433 | orchestrator | ok: [testbed-node-4] => {
2026-03-08 00:42:58.102442 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-08 00:42:58.102447 | orchestrator | }
2026-03-08 00:42:58.102452 | orchestrator |
2026-03-08 00:42:58.102457 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-08 00:42:58.102463 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.267) 0:00:39.710 **********
2026-03-08 00:42:58.102468 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:42:58.102473 | orchestrator |
2026-03-08 00:42:58.102478 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-08 00:42:58.102483 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.441) 0:00:40.152 **********
2026-03-08 00:42:58.102489 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:42:58.102494 | orchestrator |
2026-03-08 00:42:58.102499 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-08 00:42:58.102504 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.453) 0:00:40.605 **********
2026-03-08 00:42:58.102510 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:42:58.102515 | orchestrator |
2026-03-08 00:42:58.102520 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-08 00:42:58.102526 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.491) 0:00:41.097 **********
2026-03-08 00:42:58.102531 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:42:58.102536 | orchestrator |
2026-03-08 00:42:58.102541 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-08 00:42:58.102546 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.159) 0:00:41.257 **********
2026-03-08 00:42:58.102551 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102556 | orchestrator |
2026-03-08 00:42:58.102561 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-08 00:42:58.102566 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.099) 0:00:41.357 **********
2026-03-08 00:42:58.102571 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102577 | orchestrator |
2026-03-08 00:42:58.102582 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-08 00:42:58.102587 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.137) 0:00:41.494 **********
2026-03-08 00:42:58.102592 | orchestrator | ok: [testbed-node-4] => {
2026-03-08 00:42:58.102597 | orchestrator |     "vgs_report": {
2026-03-08 00:42:58.102603 | orchestrator |         "vg": []
2026-03-08 00:42:58.102608 | orchestrator |     }
2026-03-08 00:42:58.102613 | orchestrator | }
2026-03-08 00:42:58.102618 | orchestrator |
2026-03-08 00:42:58.102624 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-08 00:42:58.102629 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.166) 0:00:41.661 **********
2026-03-08 00:42:58.102634 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102639 | orchestrator |
2026-03-08 00:42:58.102644 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-08 00:42:58.102649 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.151) 0:00:41.812 **********
2026-03-08 00:42:58.102654 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102659 | orchestrator |
2026-03-08 00:42:58.102665 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-08 00:42:58.102670 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.139) 0:00:41.952 **********
2026-03-08 00:42:58.102675 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102680 | orchestrator |
2026-03-08 00:42:58.102685 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-08 00:42:58.102698 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.140) 0:00:42.093 **********
2026-03-08 00:42:58.102703 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:42:58.102708 | orchestrator |
2026-03-08 00:42:58.102719 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-08 00:43:03.100574 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.139) 0:00:42.232 **********
2026-03-08 00:43:03.100706 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.100724 | orchestrator |
2026-03-08 00:43:03.100738 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-08 00:43:03.100787 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.340) 0:00:42.572 **********
2026-03-08 00:43:03.100801 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.100811 | orchestrator |
2026-03-08 00:43:03.100822 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-08 00:43:03.100833 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.146) 0:00:42.719 **********
2026-03-08 00:43:03.100844 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.100855 | orchestrator |
2026-03-08 00:43:03.100866 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-08 00:43:03.100876 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.161) 0:00:42.881 **********
2026-03-08 00:43:03.100887 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.100898 | orchestrator |
2026-03-08 00:43:03.100909 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-08 00:43:03.100920 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.139) 0:00:43.020 **********
2026-03-08 00:43:03.100930 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.100941 | orchestrator |
2026-03-08 00:43:03.100952 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-08 00:43:03.100963 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.143) 0:00:43.164 **********
2026-03-08 00:43:03.100973 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.100984 | orchestrator |
2026-03-08 00:43:03.100995 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-08 00:43:03.101005 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.132) 0:00:43.296 **********
2026-03-08 00:43:03.101016 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101027 | orchestrator |
2026-03-08 00:43:03.101037 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-08 00:43:03.101048 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.139) 0:00:43.436 **********
2026-03-08 00:43:03.101059 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101070 | orchestrator |
2026-03-08 00:43:03.101080 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-08 00:43:03.101091 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.155) 0:00:43.591 **********
2026-03-08 00:43:03.101102 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101113 | orchestrator |
2026-03-08 00:43:03.101126 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-08 00:43:03.101139 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.152) 0:00:43.743 **********
2026-03-08 00:43:03.101152 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101165 | orchestrator |
2026-03-08 00:43:03.101178 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-08 00:43:03.101202 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.137) 0:00:43.881 **********
2026-03-08 00:43:03.101216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101259 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101279 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101300 | orchestrator |
2026-03-08 00:43:03.101320 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-08 00:43:03.101339 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.161) 0:00:44.043 **********
2026-03-08 00:43:03.101353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101389 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101402 | orchestrator |
2026-03-08 00:43:03.101415 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-08 00:43:03.101427 | orchestrator | Sunday 08 March 2026 00:43:00 +0000 (0:00:00.165) 0:00:44.209 **********
2026-03-08 00:43:03.101445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101483 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101497 | orchestrator |
2026-03-08 00:43:03.101508 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-08 00:43:03.101519 | orchestrator | Sunday 08 March 2026 00:43:00 +0000 (0:00:00.450) 0:00:44.659 **********
2026-03-08 00:43:03.101530 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101541 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101552 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101562 | orchestrator |
2026-03-08 00:43:03.101591 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-08 00:43:03.101602 | orchestrator | Sunday 08 March 2026 00:43:00 +0000 (0:00:00.177) 0:00:44.837 **********
2026-03-08 00:43:03.101613 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101635 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101645 | orchestrator |
2026-03-08 00:43:03.101656 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-08 00:43:03.101667 | orchestrator | Sunday 08 March 2026 00:43:00 +0000 (0:00:00.172) 0:00:45.010 **********
2026-03-08 00:43:03.101678 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101700 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101711 | orchestrator |
2026-03-08 00:43:03.101723 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-08 00:43:03.101734 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.161) 0:00:45.171 **********
2026-03-08 00:43:03.101745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101767 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101777 | orchestrator |
2026-03-08 00:43:03.101788 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-08 00:43:03.101799 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.154) 0:00:45.326 **********
2026-03-08 00:43:03.101810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.101829 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.101846 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.101857 | orchestrator |
2026-03-08 00:43:03.101868 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-08 00:43:03.101879 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.170) 0:00:45.497 **********
2026-03-08 00:43:03.101890 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:43:03.101901 | orchestrator |
2026-03-08 00:43:03.101912 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-08 00:43:03.101923 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.530) 0:00:46.028 **********
2026-03-08 00:43:03.101942 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:43:03.101961 | orchestrator |
2026-03-08 00:43:03.101979 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-08 00:43:03.101993 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.509) 0:00:46.537 **********
2026-03-08 00:43:03.102004 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:43:03.102015 | orchestrator |
2026-03-08 00:43:03.102080 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-08 00:43:03.102091 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.168) 0:00:46.706 **********
2026-03-08 00:43:03.102102 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'vg_name': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.102114 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'vg_name': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.102125 | orchestrator |
2026-03-08 00:43:03.102136 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-08 00:43:03.102147 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.194) 0:00:46.900 **********
2026-03-08 00:43:03.102157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.102169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:03.102179 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:03.102190 | orchestrator |
2026-03-08 00:43:03.102201 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-08 00:43:03.102212 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.152) 0:00:47.053 **********
2026-03-08 00:43:03.102223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:03.102268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:09.311757 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:09.311876 | orchestrator |
2026-03-08 00:43:09.311897 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-08 00:43:09.311913 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.178) 0:00:47.231 **********
2026-03-08 00:43:09.311930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'})
2026-03-08 00:43:09.311947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'})
2026-03-08 00:43:09.311961 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:09.311976 | orchestrator |
2026-03-08 00:43:09.311991 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-08 00:43:09.312034 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.175) 0:00:47.406 **********
2026-03-08 00:43:09.312050 | orchestrator | ok: [testbed-node-4] => {
2026-03-08 00:43:09.312066 | orchestrator |     "lvm_report": {
2026-03-08 00:43:09.312081 | orchestrator |         "lv": [
2026-03-08 00:43:09.312096 | orchestrator |             {
2026-03-08 00:43:09.312112 | orchestrator |                 "lv_name": "osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb",
2026-03-08 00:43:09.312127 | orchestrator |                 "vg_name": "ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb"
2026-03-08 00:43:09.312143 | orchestrator |             },
2026-03-08 00:43:09.312157 | orchestrator |             {
2026-03-08 00:43:09.312171 | orchestrator |                 "lv_name": "osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d",
2026-03-08 00:43:09.312186 | orchestrator |                 "vg_name": "ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d"
2026-03-08 00:43:09.312199 | orchestrator |             }
2026-03-08 00:43:09.312212 | orchestrator |         ],
2026-03-08 00:43:09.312259 | orchestrator |         "pv": [
2026-03-08 00:43:09.312274 | orchestrator |             {
2026-03-08 00:43:09.312289 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-08 00:43:09.312305 | orchestrator |                 "vg_name": "ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb"
2026-03-08 00:43:09.312320 | orchestrator |             },
2026-03-08 00:43:09.312334 | orchestrator |             {
2026-03-08 00:43:09.312349 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-08 00:43:09.312364 | orchestrator |                 "vg_name": "ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d"
2026-03-08 00:43:09.312379 | orchestrator |             }
2026-03-08 00:43:09.312393 | orchestrator |         ]
2026-03-08 00:43:09.312410 | orchestrator |     }
2026-03-08 00:43:09.312425 | orchestrator | }
2026-03-08 00:43:09.312439 | orchestrator |
2026-03-08 00:43:09.312454 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-08 00:43:09.312469 | orchestrator |
2026-03-08 00:43:09.312484 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-08 00:43:09.312499 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.580) 0:00:47.987 **********
2026-03-08 00:43:09.312515 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-08 00:43:09.312530 | orchestrator |
2026-03-08 00:43:09.312545 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-08 00:43:09.312561 | orchestrator | Sunday 08 March 2026 00:43:04 +0000 (0:00:00.297) 0:00:48.284 **********
2026-03-08 00:43:09.312576 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:09.312591 | orchestrator |
2026-03-08 00:43:09.312606 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.312621 | orchestrator | Sunday 08 March 2026 00:43:04 +0000 (0:00:00.266) 0:00:48.551 **********
2026-03-08 00:43:09.312635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-08 00:43:09.312650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-08 00:43:09.312665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-08 00:43:09.312680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-08 00:43:09.312695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-08 00:43:09.312709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-08 00:43:09.312723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-08 00:43:09.312738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-08 00:43:09.312752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-08 00:43:09.312766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-08 00:43:09.312794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-08 00:43:09.312809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-08 00:43:09.312823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-08 00:43:09.312837 | orchestrator |
2026-03-08 00:43:09.312852 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.312870 | orchestrator | Sunday 08 March 2026 00:43:04 +0000 (0:00:00.487) 0:00:49.038 **********
2026-03-08 00:43:09.312880 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.312888 | orchestrator |
2026-03-08 00:43:09.312897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.312905 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.201) 0:00:49.239 **********
2026-03-08 00:43:09.312914 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.312922 | orchestrator |
2026-03-08 00:43:09.312931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.312959 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.194) 0:00:49.433 **********
2026-03-08 00:43:09.312968 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.312977 | orchestrator |
2026-03-08 00:43:09.312985 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.312994 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.207) 0:00:49.641 **********
2026-03-08 00:43:09.313003 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.313011 | orchestrator |
2026-03-08 00:43:09.313020 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313028 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.197) 0:00:49.838 **********
2026-03-08 00:43:09.313037 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.313046 | orchestrator |
2026-03-08 00:43:09.313054 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313063 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.545) 0:00:50.384 **********
2026-03-08 00:43:09.313071 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.313080 | orchestrator |
2026-03-08 00:43:09.313088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313097 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.184) 0:00:50.569 **********
2026-03-08 00:43:09.313106 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.313114 | orchestrator |
2026-03-08 00:43:09.313123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313131 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.195) 0:00:50.764 **********
2026-03-08 00:43:09.313140 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:09.313148 | orchestrator |
2026-03-08 00:43:09.313157 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313166 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.200) 0:00:50.964 **********
2026-03-08 00:43:09.313174 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7)
2026-03-08 00:43:09.313184 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7)
2026-03-08 00:43:09.313192 | orchestrator |
2026-03-08 00:43:09.313201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313209 | orchestrator | Sunday 08 March 2026 00:43:07 +0000 (0:00:00.401) 0:00:51.365 **********
2026-03-08 00:43:09.313291 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698)
2026-03-08 00:43:09.313304 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698)
2026-03-08 00:43:09.313312 | orchestrator |
2026-03-08 00:43:09.313321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313341 | orchestrator | Sunday 08 March 2026 00:43:07 +0000 (0:00:00.424) 0:00:51.790 **********
2026-03-08 00:43:09.313350 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751)
2026-03-08 00:43:09.313358 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751)
2026-03-08 00:43:09.313366 | orchestrator |
2026-03-08 00:43:09.313375 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313383 | orchestrator | Sunday 08 March 2026 00:43:08 +0000 (0:00:00.427) 0:00:52.217 **********
2026-03-08 00:43:09.313392 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087)
2026-03-08 00:43:09.313400 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087)
2026-03-08 00:43:09.313409 | orchestrator |
2026-03-08 00:43:09.313417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:09.313425 | orchestrator | Sunday 08 March 2026 00:43:08 +0000 (0:00:00.428) 0:00:52.645 **********
2026-03-08 00:43:09.313434 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-08 00:43:09.313442 | orchestrator |
2026-03-08 00:43:09.313451 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:09.313459 | orchestrator | Sunday 08 March 2026 00:43:08 +0000 (0:00:00.371) 0:00:53.017 **********
2026-03-08 00:43:09.313467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-08 00:43:09.313476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-08 00:43:09.313485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-08 00:43:09.313493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-08 00:43:09.313501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-08 00:43:09.313510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-08 00:43:09.313518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-08 00:43:09.313526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-08 00:43:09.313535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-08 00:43:09.313543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-08 00:43:09.313552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-08 00:43:09.313567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-08 00:43:17.947252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-08 00:43:17.947377 | orchestrator |
2026-03-08 00:43:17.947389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:17.947397 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.419) 0:00:53.437 **********
2026-03-08 00:43:17.947405 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:17.947414 | orchestrator |
2026-03-08 00:43:17.947421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:17.947429 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.215) 0:00:53.652 **********
2026-03-08 00:43:17.947437 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:17.947444 | orchestrator |
2026-03-08 00:43:17.947452 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:17.947459 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.627) 0:00:54.280 **********
2026-03-08 00:43:17.947467 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:17.947502 | orchestrator |
2026-03-08 00:43:17.947509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:17.947516 |
orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.197) 0:00:54.477 ********** 2026-03-08 00:43:17.947523 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947530 | orchestrator | 2026-03-08 00:43:17.947537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947544 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.218) 0:00:54.696 ********** 2026-03-08 00:43:17.947551 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947558 | orchestrator | 2026-03-08 00:43:17.947565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947572 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.212) 0:00:54.908 ********** 2026-03-08 00:43:17.947580 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947587 | orchestrator | 2026-03-08 00:43:17.947594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947601 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.196) 0:00:55.105 ********** 2026-03-08 00:43:17.947608 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947615 | orchestrator | 2026-03-08 00:43:17.947622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947629 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.194) 0:00:55.300 ********** 2026-03-08 00:43:17.947636 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947643 | orchestrator | 2026-03-08 00:43:17.947649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947656 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.184) 0:00:55.485 ********** 2026-03-08 00:43:17.947664 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-08 00:43:17.947689 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-08 00:43:17.947696 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-08 00:43:17.947702 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-08 00:43:17.947709 | orchestrator | 2026-03-08 00:43:17.947715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947722 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.623) 0:00:56.108 ********** 2026-03-08 00:43:17.947728 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947735 | orchestrator | 2026-03-08 00:43:17.947742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947749 | orchestrator | Sunday 08 March 2026 00:43:12 +0000 (0:00:00.198) 0:00:56.307 ********** 2026-03-08 00:43:17.947757 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947765 | orchestrator | 2026-03-08 00:43:17.947774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947783 | orchestrator | Sunday 08 March 2026 00:43:12 +0000 (0:00:00.195) 0:00:56.503 ********** 2026-03-08 00:43:17.947792 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947801 | orchestrator | 2026-03-08 00:43:17.947808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:43:17.947816 | orchestrator | Sunday 08 March 2026 00:43:12 +0000 (0:00:00.179) 0:00:56.682 ********** 2026-03-08 00:43:17.947823 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.947830 | orchestrator | 2026-03-08 00:43:17.947838 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-08 00:43:17.947845 | orchestrator | Sunday 08 March 2026 00:43:12 +0000 (0:00:00.205) 0:00:56.887 ********** 2026-03-08 00:43:17.947852 | orchestrator | skipping: [testbed-node-5] 2026-03-08 
00:43:17.947859 | orchestrator | 2026-03-08 00:43:17.947867 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-08 00:43:17.947874 | orchestrator | Sunday 08 March 2026 00:43:13 +0000 (0:00:00.320) 0:00:57.208 ********** 2026-03-08 00:43:17.947882 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9742d483-d5c0-528b-aa0f-657894200b45'}}) 2026-03-08 00:43:17.947902 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5322502-cf2a-5eb6-8fcb-1a734f718f57'}}) 2026-03-08 00:43:17.947910 | orchestrator | 2026-03-08 00:43:17.947917 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-08 00:43:17.947925 | orchestrator | Sunday 08 March 2026 00:43:13 +0000 (0:00:00.171) 0:00:57.380 ********** 2026-03-08 00:43:17.947934 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'}) 2026-03-08 00:43:17.947943 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'}) 2026-03-08 00:43:17.947951 | orchestrator | 2026-03-08 00:43:17.947957 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-08 00:43:17.947985 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:01.780) 0:00:59.160 ********** 2026-03-08 00:43:17.947994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:17.948003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:17.948010 | orchestrator | skipping: 
[testbed-node-5] 2026-03-08 00:43:17.948019 | orchestrator | 2026-03-08 00:43:17.948026 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-08 00:43:17.948034 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.158) 0:00:59.319 ********** 2026-03-08 00:43:17.948042 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'}) 2026-03-08 00:43:17.948050 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'}) 2026-03-08 00:43:17.948057 | orchestrator | 2026-03-08 00:43:17.948065 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-08 00:43:17.948074 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:01.271) 0:01:00.590 ********** 2026-03-08 00:43:17.948081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:17.948089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:17.948097 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.948105 | orchestrator | 2026-03-08 00:43:17.948113 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-08 00:43:17.948120 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.151) 0:01:00.742 ********** 2026-03-08 00:43:17.948127 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.948134 | orchestrator | 2026-03-08 00:43:17.948141 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-08 00:43:17.948149 | 
orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.136) 0:01:00.878 ********** 2026-03-08 00:43:17.948156 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:17.948172 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:17.948179 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.948186 | orchestrator | 2026-03-08 00:43:17.948193 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-08 00:43:17.948200 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.161) 0:01:01.039 ********** 2026-03-08 00:43:17.948266 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.948274 | orchestrator | 2026-03-08 00:43:17.948279 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-08 00:43:17.948286 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.131) 0:01:01.171 ********** 2026-03-08 00:43:17.948292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:17.948299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:17.948305 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.948311 | orchestrator | 2026-03-08 00:43:17.948317 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-08 00:43:17.948323 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.137) 0:01:01.308 ********** 2026-03-08 00:43:17.948329 | orchestrator | 
skipping: [testbed-node-5] 2026-03-08 00:43:17.948335 | orchestrator | 2026-03-08 00:43:17.948341 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-08 00:43:17.948348 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.126) 0:01:01.435 ********** 2026-03-08 00:43:17.948354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:17.948360 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:17.948366 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:17.948373 | orchestrator | 2026-03-08 00:43:17.948381 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-08 00:43:17.948388 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.157) 0:01:01.593 ********** 2026-03-08 00:43:17.948394 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:17.948403 | orchestrator | 2026-03-08 00:43:17.948410 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-08 00:43:17.948417 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.331) 0:01:01.924 ********** 2026-03-08 00:43:17.948434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:23.922289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:23.922404 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.922421 | orchestrator | 2026-03-08 00:43:23.922434 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-08 00:43:23.922447 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.154) 0:01:02.078 ********** 2026-03-08 00:43:23.922459 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:23.922471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:23.922482 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.922493 | orchestrator | 2026-03-08 00:43:23.922505 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-08 00:43:23.922516 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.157) 0:01:02.236 ********** 2026-03-08 00:43:23.922527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:23.922538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:23.922576 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.922588 | orchestrator | 2026-03-08 00:43:23.922599 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-08 00:43:23.922610 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.146) 0:01:02.382 ********** 2026-03-08 00:43:23.922620 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.922631 | orchestrator | 2026-03-08 00:43:23.922642 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-08 00:43:23.922653 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 
(0:00:00.134) 0:01:02.517 ********** 2026-03-08 00:43:23.922663 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.922674 | orchestrator | 2026-03-08 00:43:23.922686 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-08 00:43:23.922704 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.133) 0:01:02.651 ********** 2026-03-08 00:43:23.922720 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.922737 | orchestrator | 2026-03-08 00:43:23.922756 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-08 00:43:23.922775 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.130) 0:01:02.781 ********** 2026-03-08 00:43:23.922794 | orchestrator | ok: [testbed-node-5] => { 2026-03-08 00:43:23.922813 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-08 00:43:23.922832 | orchestrator | } 2026-03-08 00:43:23.922852 | orchestrator | 2026-03-08 00:43:23.922873 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-08 00:43:23.922889 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.138) 0:01:02.919 ********** 2026-03-08 00:43:23.922902 | orchestrator | ok: [testbed-node-5] => { 2026-03-08 00:43:23.922915 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-08 00:43:23.922927 | orchestrator | } 2026-03-08 00:43:23.922940 | orchestrator | 2026-03-08 00:43:23.922953 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-08 00:43:23.922974 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.140) 0:01:03.060 ********** 2026-03-08 00:43:23.922991 | orchestrator | ok: [testbed-node-5] => { 2026-03-08 00:43:23.923009 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-08 00:43:23.923026 | orchestrator | } 2026-03-08 00:43:23.923044 | orchestrator | 2026-03-08 00:43:23.923062 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-08 00:43:23.923080 | orchestrator | Sunday 08 March 2026 00:43:19 +0000 (0:00:00.165) 0:01:03.225 ********** 2026-03-08 00:43:23.923099 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:23.923117 | orchestrator | 2026-03-08 00:43:23.923135 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-08 00:43:23.923153 | orchestrator | Sunday 08 March 2026 00:43:19 +0000 (0:00:00.504) 0:01:03.729 ********** 2026-03-08 00:43:23.923170 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:23.923188 | orchestrator | 2026-03-08 00:43:23.923236 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-08 00:43:23.923258 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.496) 0:01:04.226 ********** 2026-03-08 00:43:23.923276 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:23.923294 | orchestrator | 2026-03-08 00:43:23.923312 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-08 00:43:23.923330 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.717) 0:01:04.943 ********** 2026-03-08 00:43:23.923349 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:23.923369 | orchestrator | 2026-03-08 00:43:23.923389 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-08 00:43:23.923409 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.151) 0:01:05.094 ********** 2026-03-08 00:43:23.923428 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.923446 | orchestrator | 2026-03-08 00:43:23.923465 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-08 00:43:23.923501 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.112) 0:01:05.207 ********** 2026-03-08 00:43:23.923521 | 
orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.923540 | orchestrator | 2026-03-08 00:43:23.923559 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-08 00:43:23.923578 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.111) 0:01:05.318 ********** 2026-03-08 00:43:23.923596 | orchestrator | ok: [testbed-node-5] => { 2026-03-08 00:43:23.923615 | orchestrator |  "vgs_report": { 2026-03-08 00:43:23.923636 | orchestrator |  "vg": [] 2026-03-08 00:43:23.923680 | orchestrator |  } 2026-03-08 00:43:23.923701 | orchestrator | } 2026-03-08 00:43:23.923720 | orchestrator | 2026-03-08 00:43:23.923738 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-08 00:43:23.923757 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.162) 0:01:05.481 ********** 2026-03-08 00:43:23.923776 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.923795 | orchestrator | 2026-03-08 00:43:23.923814 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-08 00:43:23.923833 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.139) 0:01:05.620 ********** 2026-03-08 00:43:23.923850 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.923869 | orchestrator | 2026-03-08 00:43:23.923890 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-08 00:43:23.923908 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.154) 0:01:05.774 ********** 2026-03-08 00:43:23.923926 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.923944 | orchestrator | 2026-03-08 00:43:23.923962 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-08 00:43:23.923980 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.130) 0:01:05.904 ********** 2026-03-08 00:43:23.924019 | 
orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924052 | orchestrator | 2026-03-08 00:43:23.924071 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-08 00:43:23.924089 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.139) 0:01:06.044 ********** 2026-03-08 00:43:23.924108 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924128 | orchestrator | 2026-03-08 00:43:23.924147 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-08 00:43:23.924167 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.134) 0:01:06.179 ********** 2026-03-08 00:43:23.924185 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924229 | orchestrator | 2026-03-08 00:43:23.924273 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-08 00:43:23.924294 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.127) 0:01:06.306 ********** 2026-03-08 00:43:23.924312 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924330 | orchestrator | 2026-03-08 00:43:23.924349 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-08 00:43:23.924367 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.125) 0:01:06.432 ********** 2026-03-08 00:43:23.924386 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924404 | orchestrator | 2026-03-08 00:43:23.924422 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-08 00:43:23.924440 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.312) 0:01:06.744 ********** 2026-03-08 00:43:23.924460 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924479 | orchestrator | 2026-03-08 00:43:23.924504 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-08 00:43:23.924523 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.142) 0:01:06.887 ********** 2026-03-08 00:43:23.924541 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924559 | orchestrator | 2026-03-08 00:43:23.924579 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-08 00:43:23.924597 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.138) 0:01:07.025 ********** 2026-03-08 00:43:23.924627 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924645 | orchestrator | 2026-03-08 00:43:23.924664 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-08 00:43:23.924684 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.138) 0:01:07.163 ********** 2026-03-08 00:43:23.924703 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924721 | orchestrator | 2026-03-08 00:43:23.924739 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-08 00:43:23.924758 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.122) 0:01:07.286 ********** 2026-03-08 00:43:23.924776 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924795 | orchestrator | 2026-03-08 00:43:23.924814 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-08 00:43:23.924832 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.126) 0:01:07.412 ********** 2026-03-08 00:43:23.924851 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924869 | orchestrator | 2026-03-08 00:43:23.924887 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-08 00:43:23.924905 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.153) 0:01:07.566 ********** 2026-03-08 00:43:23.924925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:23.924944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:23.924962 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.924980 | orchestrator | 2026-03-08 00:43:23.924998 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-08 00:43:23.925016 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.173) 0:01:07.740 ********** 2026-03-08 00:43:23.925036 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:23.925055 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:23.925073 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:23.925092 | orchestrator | 2026-03-08 00:43:23.925110 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-08 00:43:23.925128 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.166) 0:01:07.907 ********** 2026-03-08 00:43:23.925161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.959335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.959472 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.959490 | orchestrator | 2026-03-08 00:43:26.959503 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-08 00:43:26.959516 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.147) 0:01:08.055 ********** 2026-03-08 00:43:26.959528 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.959539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.959550 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.959561 | orchestrator | 2026-03-08 00:43:26.959585 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-08 00:43:26.959638 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.145) 0:01:08.200 ********** 2026-03-08 00:43:26.959676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.959688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.959699 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.959710 | orchestrator | 2026-03-08 00:43:26.959722 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-08 00:43:26.959733 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.152) 0:01:08.353 ********** 2026-03-08 00:43:26.959743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.959754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.959780 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.959792 | orchestrator | 2026-03-08 00:43:26.959803 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-08 00:43:26.959815 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.334) 0:01:08.687 ********** 2026-03-08 00:43:26.959829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.959842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.959854 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.959867 | orchestrator | 2026-03-08 00:43:26.959879 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-08 00:43:26.959891 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.147) 0:01:08.835 ********** 2026-03-08 00:43:26.959903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.959916 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.959929 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.959941 | orchestrator | 2026-03-08 00:43:26.959955 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-08 00:43:26.959967 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.153) 0:01:08.988 ********** 2026-03-08 00:43:26.959980 | 
orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:26.959992 | orchestrator | 2026-03-08 00:43:26.960002 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-08 00:43:26.960013 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.516) 0:01:09.505 ********** 2026-03-08 00:43:26.960024 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:26.960034 | orchestrator | 2026-03-08 00:43:26.960045 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-08 00:43:26.960056 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.534) 0:01:10.040 ********** 2026-03-08 00:43:26.960066 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:26.960077 | orchestrator | 2026-03-08 00:43:26.960087 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-08 00:43:26.960098 | orchestrator | Sunday 08 March 2026 00:43:26 +0000 (0:00:00.155) 0:01:10.195 ********** 2026-03-08 00:43:26.960109 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'vg_name': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'}) 2026-03-08 00:43:26.960121 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'vg_name': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'}) 2026-03-08 00:43:26.960139 | orchestrator | 2026-03-08 00:43:26.960149 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-08 00:43:26.960160 | orchestrator | Sunday 08 March 2026 00:43:26 +0000 (0:00:00.169) 0:01:10.365 ********** 2026-03-08 00:43:26.960191 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.960224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.960236 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.960246 | orchestrator | 2026-03-08 00:43:26.960257 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-08 00:43:26.960268 | orchestrator | Sunday 08 March 2026 00:43:26 +0000 (0:00:00.160) 0:01:10.526 ********** 2026-03-08 00:43:26.960279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.960290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.960301 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.960312 | orchestrator | 2026-03-08 00:43:26.960323 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-08 00:43:26.960333 | orchestrator | Sunday 08 March 2026 00:43:26 +0000 (0:00:00.156) 0:01:10.682 ********** 2026-03-08 00:43:26.960344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'})  2026-03-08 00:43:26.960355 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'})  2026-03-08 00:43:26.960366 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:26.960377 | orchestrator | 2026-03-08 00:43:26.960387 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-08 00:43:26.960398 | orchestrator | Sunday 08 March 2026 00:43:26 +0000 (0:00:00.211) 0:01:10.893 ********** 2026-03-08 00:43:26.960409 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-08 00:43:26.960420 | orchestrator |  "lvm_report": { 2026-03-08 00:43:26.960432 | orchestrator |  "lv": [ 2026-03-08 00:43:26.960443 | orchestrator |  { 2026-03-08 00:43:26.960454 | orchestrator |  "lv_name": "osd-block-9742d483-d5c0-528b-aa0f-657894200b45", 2026-03-08 00:43:26.960470 | orchestrator |  "vg_name": "ceph-9742d483-d5c0-528b-aa0f-657894200b45" 2026-03-08 00:43:26.960482 | orchestrator |  }, 2026-03-08 00:43:26.960492 | orchestrator |  { 2026-03-08 00:43:26.960503 | orchestrator |  "lv_name": "osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57", 2026-03-08 00:43:26.960514 | orchestrator |  "vg_name": "ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57" 2026-03-08 00:43:26.960525 | orchestrator |  } 2026-03-08 00:43:26.960536 | orchestrator |  ], 2026-03-08 00:43:26.960546 | orchestrator |  "pv": [ 2026-03-08 00:43:26.960557 | orchestrator |  { 2026-03-08 00:43:26.960568 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-08 00:43:26.960579 | orchestrator |  "vg_name": "ceph-9742d483-d5c0-528b-aa0f-657894200b45" 2026-03-08 00:43:26.960590 | orchestrator |  }, 2026-03-08 00:43:26.960601 | orchestrator |  { 2026-03-08 00:43:26.960611 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-08 00:43:26.960629 | orchestrator |  "vg_name": "ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57" 2026-03-08 00:43:26.960647 | orchestrator |  } 2026-03-08 00:43:26.960659 | orchestrator |  ] 2026-03-08 00:43:26.960669 | orchestrator |  } 2026-03-08 00:43:26.960680 | orchestrator | } 2026-03-08 00:43:26.960699 | orchestrator | 2026-03-08 00:43:26.960710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:43:26.960720 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-08 00:43:26.960731 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-08 00:43:26.960741 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-08 00:43:26.960752 | orchestrator | 2026-03-08 00:43:26.960762 | orchestrator | 2026-03-08 00:43:26.960773 | orchestrator | 2026-03-08 00:43:26.960783 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:43:26.960794 | orchestrator | Sunday 08 March 2026 00:43:26 +0000 (0:00:00.164) 0:01:11.058 ********** 2026-03-08 00:43:26.960804 | orchestrator | =============================================================================== 2026-03-08 00:43:26.960815 | orchestrator | Create block VGs -------------------------------------------------------- 5.52s 2026-03-08 00:43:26.960825 | orchestrator | Create block LVs -------------------------------------------------------- 3.97s 2026-03-08 00:43:26.960836 | orchestrator | Add known partitions to the list of available block devices ------------- 1.72s 2026-03-08 00:43:26.960846 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s 2026-03-08 00:43:26.960857 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.58s 2026-03-08 00:43:26.960867 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2026-03-08 00:43:26.960878 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s 2026-03-08 00:43:26.960889 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.44s 2026-03-08 00:43:26.960906 | orchestrator | Add known links to the list of available block devices ------------------ 1.42s 2026-03-08 00:43:27.332227 | orchestrator | Print LVM report data --------------------------------------------------- 1.03s 2026-03-08 00:43:27.332319 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s 2026-03-08 00:43:27.332330 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2026-03-08 00:43:27.332338 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2026-03-08 00:43:27.332345 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.73s 2026-03-08 00:43:27.332353 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-08 00:43:27.332360 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-08 00:43:27.332368 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.69s 2026-03-08 00:43:27.332375 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-03-08 00:43:27.332382 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-08 00:43:27.332390 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-08 00:43:40.051924 | orchestrator | 2026-03-08 00:43:40 | INFO  | Task 07363de3-3299-4e3b-a726-e6fa290501e7 (facts) was prepared for execution. 2026-03-08 00:43:40.052002 | orchestrator | 2026-03-08 00:43:40 | INFO  | It takes a moment until task 07363de3-3299-4e3b-a726-e6fa290501e7 (facts) has been started and output is visible here. 
2026-03-08 00:43:51.780015 | orchestrator | 2026-03-08 00:43:51.780128 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-08 00:43:51.780145 | orchestrator | 2026-03-08 00:43:51.780158 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-08 00:43:51.780169 | orchestrator | Sunday 08 March 2026 00:43:44 +0000 (0:00:00.241) 0:00:00.241 ********** 2026-03-08 00:43:51.780294 | orchestrator | ok: [testbed-manager] 2026-03-08 00:43:51.780310 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:43:51.780321 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:43:51.780332 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:43:51.780342 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:43:51.780353 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:43:51.780363 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:51.780374 | orchestrator | 2026-03-08 00:43:51.780385 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-08 00:43:51.780410 | orchestrator | Sunday 08 March 2026 00:43:45 +0000 (0:00:00.938) 0:00:01.180 ********** 2026-03-08 00:43:51.780422 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:43:51.780434 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:43:51.780445 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:43:51.780456 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:43:51.780466 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:43:51.780476 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:51.780487 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:51.780498 | orchestrator | 2026-03-08 00:43:51.780508 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:43:51.780519 | orchestrator | 2026-03-08 00:43:51.780530 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-08 00:43:51.780541 | orchestrator | Sunday 08 March 2026 00:43:46 +0000 (0:00:01.116) 0:00:02.296 ********** 2026-03-08 00:43:51.780551 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:43:51.780565 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:43:51.780577 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:43:51.780590 | orchestrator | ok: [testbed-manager] 2026-03-08 00:43:51.780602 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:43:51.780614 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:43:51.780627 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:51.780639 | orchestrator | 2026-03-08 00:43:51.780651 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-08 00:43:51.780663 | orchestrator | 2026-03-08 00:43:51.780675 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-08 00:43:51.780688 | orchestrator | Sunday 08 March 2026 00:43:50 +0000 (0:00:04.635) 0:00:06.932 ********** 2026-03-08 00:43:51.780700 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:43:51.780712 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:43:51.780725 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:43:51.780737 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:43:51.780749 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:43:51.780761 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:51.780773 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:51.780786 | orchestrator | 2026-03-08 00:43:51.780798 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:43:51.780811 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:43:51.780825 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-08 00:43:51.780838 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:43:51.780851 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:43:51.780864 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:43:51.780877 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:43:51.780889 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:43:51.780910 | orchestrator | 2026-03-08 00:43:51.780923 | orchestrator | 2026-03-08 00:43:51.780934 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:43:51.780945 | orchestrator | Sunday 08 March 2026 00:43:51 +0000 (0:00:00.504) 0:00:07.436 ********** 2026-03-08 00:43:51.780956 | orchestrator | =============================================================================== 2026-03-08 00:43:51.780967 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.64s 2026-03-08 00:43:51.780978 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s 2026-03-08 00:43:51.780989 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.94s 2026-03-08 00:43:51.781000 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-08 00:44:04.135092 | orchestrator | 2026-03-08 00:44:04 | INFO  | Task f552e675-2a85-4a9f-ba46-2b44e298657b (frr) was prepared for execution. 2026-03-08 00:44:04.135252 | orchestrator | 2026-03-08 00:44:04 | INFO  | It takes a moment until task f552e675-2a85-4a9f-ba46-2b44e298657b (frr) has been started and output is visible here. 
2026-03-08 00:44:29.341086 | orchestrator | 2026-03-08 00:44:29.341198 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-08 00:44:29.341207 | orchestrator | 2026-03-08 00:44:29.341211 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-08 00:44:29.341216 | orchestrator | Sunday 08 March 2026 00:44:08 +0000 (0:00:00.209) 0:00:00.209 ********** 2026-03-08 00:44:29.341221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:44:29.341226 | orchestrator | 2026-03-08 00:44:29.341230 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-08 00:44:29.341234 | orchestrator | Sunday 08 March 2026 00:44:08 +0000 (0:00:00.199) 0:00:00.409 ********** 2026-03-08 00:44:29.341239 | orchestrator | changed: [testbed-manager] 2026-03-08 00:44:29.341244 | orchestrator | 2026-03-08 00:44:29.341247 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-08 00:44:29.341251 | orchestrator | Sunday 08 March 2026 00:44:09 +0000 (0:00:00.995) 0:00:01.405 ********** 2026-03-08 00:44:29.341256 | orchestrator | changed: [testbed-manager] 2026-03-08 00:44:29.341259 | orchestrator | 2026-03-08 00:44:29.341263 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-08 00:44:29.341268 | orchestrator | Sunday 08 March 2026 00:44:18 +0000 (0:00:09.109) 0:00:10.515 ********** 2026-03-08 00:44:29.341272 | orchestrator | ok: [testbed-manager] 2026-03-08 00:44:29.341276 | orchestrator | 2026-03-08 00:44:29.341281 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-08 00:44:29.341285 | orchestrator | Sunday 08 March 2026 00:44:19 +0000 (0:00:01.025) 0:00:11.540 ********** 2026-03-08 
00:44:29.341289 | orchestrator | changed: [testbed-manager] 2026-03-08 00:44:29.341293 | orchestrator | 2026-03-08 00:44:29.341296 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-08 00:44:29.341300 | orchestrator | Sunday 08 March 2026 00:44:20 +0000 (0:00:00.909) 0:00:12.449 ********** 2026-03-08 00:44:29.341304 | orchestrator | ok: [testbed-manager] 2026-03-08 00:44:29.341308 | orchestrator | 2026-03-08 00:44:29.341312 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-08 00:44:29.341317 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:01.126) 0:00:13.575 ********** 2026-03-08 00:44:29.341321 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:44:29.341325 | orchestrator | 2026-03-08 00:44:29.341329 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-08 00:44:29.341333 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:00.136) 0:00:13.712 ********** 2026-03-08 00:44:29.341344 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:44:29.341360 | orchestrator | 2026-03-08 00:44:29.341364 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-08 00:44:29.341368 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:00.157) 0:00:13.870 ********** 2026-03-08 00:44:29.341372 | orchestrator | changed: [testbed-manager] 2026-03-08 00:44:29.341376 | orchestrator | 2026-03-08 00:44:29.341380 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-08 00:44:29.341384 | orchestrator | Sunday 08 March 2026 00:44:23 +0000 (0:00:01.988) 0:00:15.859 ********** 2026-03-08 00:44:29.341388 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-08 00:44:29.341392 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-08 00:44:29.341396 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-08 00:44:29.341400 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-08 00:44:29.341404 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-08 00:44:29.341408 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-08 00:44:29.341412 | orchestrator | 2026-03-08 00:44:29.341416 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-08 00:44:29.341420 | orchestrator | Sunday 08 March 2026 00:44:26 +0000 (0:00:02.225) 0:00:18.084 ********** 2026-03-08 00:44:29.341424 | orchestrator | ok: [testbed-manager] 2026-03-08 00:44:29.341428 | orchestrator | 2026-03-08 00:44:29.341432 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-08 00:44:29.341436 | orchestrator | Sunday 08 March 2026 00:44:27 +0000 (0:00:01.569) 0:00:19.653 ********** 2026-03-08 00:44:29.341439 | orchestrator | changed: [testbed-manager] 2026-03-08 00:44:29.341443 | orchestrator | 2026-03-08 00:44:29.341447 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:44:29.341451 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:44:29.341455 | orchestrator | 2026-03-08 00:44:29.341459 | orchestrator | 2026-03-08 00:44:29.341463 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:44:29.341467 | orchestrator | Sunday 08 March 2026 00:44:29 +0000 (0:00:01.405) 0:00:21.058 ********** 2026-03-08 00:44:29.341471 | 
orchestrator | =============================================================================== 2026-03-08 00:44:29.341475 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.11s 2026-03-08 00:44:29.341478 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.23s 2026-03-08 00:44:29.341482 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.99s 2026-03-08 00:44:29.341486 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.57s 2026-03-08 00:44:29.341490 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s 2026-03-08 00:44:29.341503 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.13s 2026-03-08 00:44:29.341507 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-03-08 00:44:29.341511 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.00s 2026-03-08 00:44:29.341515 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2026-03-08 00:44:29.341519 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-03-08 00:44:29.341523 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-03-08 00:44:29.341527 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-08 00:44:29.636580 | orchestrator | 2026-03-08 00:44:29.638190 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Mar 8 00:44:29 UTC 2026 2026-03-08 00:44:29.638249 | orchestrator | 2026-03-08 00:44:31.540875 | orchestrator | 2026-03-08 00:44:31 | INFO  | Collection nutshell is prepared for execution 2026-03-08 00:44:31.540986 | orchestrator | 2026-03-08 00:44:31 | INFO  | A [0] - 
dotfiles 2026-03-08 00:44:41.590340 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - homer 2026-03-08 00:44:41.590480 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - netdata 2026-03-08 00:44:41.590489 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - openstackclient 2026-03-08 00:44:41.590494 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - phpmyadmin 2026-03-08 00:44:41.590499 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - common 2026-03-08 00:44:41.595249 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- loadbalancer 2026-03-08 00:44:41.595373 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- opensearch 2026-03-08 00:44:41.595393 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- mariadb-ng 2026-03-08 00:44:41.595401 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [3] ---- horizon 2026-03-08 00:44:41.595408 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [3] ---- keystone 2026-03-08 00:44:41.595833 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- neutron 2026-03-08 00:44:41.595857 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ wait-for-nova 2026-03-08 00:44:41.596101 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [6] ------- octavia 2026-03-08 00:44:41.598423 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- barbican 2026-03-08 00:44:41.598470 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- designate 2026-03-08 00:44:41.598478 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- ironic 2026-03-08 00:44:41.598644 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- placement 2026-03-08 00:44:41.598661 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- magnum 2026-03-08 00:44:41.599587 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- openvswitch 2026-03-08 00:44:41.599620 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- ovn 2026-03-08 00:44:41.599815 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- memcached 2026-03-08 
00:44:41.599829 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- redis 2026-03-08 00:44:41.600221 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- rabbitmq-ng 2026-03-08 00:44:41.600489 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - kubernetes 2026-03-08 00:44:41.603117 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- kubeconfig 2026-03-08 00:44:41.603180 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- copy-kubeconfig 2026-03-08 00:44:41.603446 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - ceph 2026-03-08 00:44:41.605704 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- ceph-pools 2026-03-08 00:44:41.605765 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- copy-ceph-keys 2026-03-08 00:44:41.605772 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [3] ---- cephclient 2026-03-08 00:44:41.605781 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-08 00:44:41.605953 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- wait-for-keystone 2026-03-08 00:44:41.607633 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-08 00:44:41.607697 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ glance 2026-03-08 00:44:41.607710 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ cinder 2026-03-08 00:44:41.607746 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ nova 2026-03-08 00:44:41.607756 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- prometheus 2026-03-08 00:44:41.607766 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ grafana 2026-03-08 00:44:41.818448 | orchestrator | 2026-03-08 00:44:41 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-08 00:44:41.818547 | orchestrator | 2026-03-08 00:44:41 | INFO  | Tasks are running in the background 2026-03-08 00:44:44.669865 | orchestrator | 2026-03-08 00:44:44 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-08 00:44:46.769303 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:44:46.769573 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:44:46.770155 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:44:46.770837 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:44:46.771582 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:44:46.773466 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:44:46.774104 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:44:46.774290 | orchestrator | 2026-03-08 00:44:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:44:49.819817 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:44:49.820090 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:44:49.820727 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:44:49.821782 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:44:49.822475 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:44:49.822926 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:44:49.827079 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 
24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:44:49.827178 | orchestrator | 2026-03-08 00:44:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:44:52.857472 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:44:52.857782 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:44:52.858183 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:44:52.873812 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:44:52.873877 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:44:52.873883 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:44:52.873888 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:44:52.873913 | orchestrator | 2026-03-08 00:44:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:44:56.014414 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:44:56.014513 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:44:56.014527 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:44:56.014538 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:44:56.014548 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:44:56.014557 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 
4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:44:56.014567 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:44:56.014577 | orchestrator | 2026-03-08 00:44:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:44:58.979970 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:44:58.980893 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:44:58.984895 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:44:58.987485 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:44:58.991095 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:44:58.992607 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:44:58.995399 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:44:58.995479 | orchestrator | 2026-03-08 00:44:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:02.217904 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:02.217984 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:02.217990 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:02.217995 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:45:02.217999 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task 
6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:02.218003 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:02.218006 | orchestrator | 2026-03-08 00:45:02 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:02.218011 | orchestrator | 2026-03-08 00:45:02 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:05.346314 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:05.346435 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:05.347644 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:05.348366 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state STARTED 2026-03-08 00:45:05.349250 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:05.349559 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:05.350705 | orchestrator | 2026-03-08 00:45:05 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:05.350757 | orchestrator | 2026-03-08 00:45:05 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:08.454795 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:08.457499 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:08.457580 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:08.458301 | orchestrator | 2026-03-08 00:45:08.458346 | orchestrator 
| PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-08 00:45:08.458355 | orchestrator | 2026-03-08 00:45:08.458362 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-08 00:45:08.458369 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.272) 0:00:00.273 ********** 2026-03-08 00:45:08.458376 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:45:08.458384 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:45:08.458391 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:45:08.458398 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:45:08.458405 | orchestrator | changed: [testbed-manager] 2026-03-08 00:45:08.458411 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:45:08.458417 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:45:08.458423 | orchestrator | 2026-03-08 00:45:08.458429 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-03-08 00:45:08.458436 | orchestrator | Sunday 08 March 2026 00:44:58 +0000 (0:00:03.736) 0:00:04.009 ********** 2026-03-08 00:45:08.458443 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-08 00:45:08.458450 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-08 00:45:08.458457 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-08 00:45:08.458463 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-08 00:45:08.458470 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-08 00:45:08.458477 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-08 00:45:08.458485 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-08 00:45:08.458491 | orchestrator | 2026-03-08 00:45:08.458498 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-03-08 00:45:08.458506 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:01.695) 0:00:05.704 ********** 2026-03-08 00:45:08.458523 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.815216', 'end': '2026-03-08 00:44:59.823636', 'delta': '0:00:00.008420', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458534 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.355886', 'end': '2026-03-08 00:44:59.362462', 'delta': '0:00:00.006576', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458569 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.410147', 'end': '2026-03-08 00:44:59.418252', 'delta': '0:00:00.008105', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458598 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.811654', 'end': '2026-03-08 00:44:59.820042', 'delta': '0:00:00.008388', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458606 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.952096', 'end': '2026-03-08 00:44:59.959310', 'delta': '0:00:00.007214', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458612 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.955564', 'end': '2026-03-08 00:44:59.962706', 'delta': '0:00:00.007142', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458843 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:45:00.131943', 'end': '2026-03-08 00:45:00.139450', 'delta': '0:00:00.007507', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-08 00:45:08.458861 | orchestrator | 2026-03-08 00:45:08.458869 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-03-08 00:45:08.458876 | orchestrator | Sunday 08 March 2026 00:45:03 +0000 (0:00:02.698) 0:00:08.403 ********** 2026-03-08 00:45:08.458884 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-08 00:45:08.458890 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-08 00:45:08.458896 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-08 00:45:08.458903 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-08 00:45:08.458909 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-08 00:45:08.458915 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-08 00:45:08.458921 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-08 00:45:08.458927 | orchestrator | 2026-03-08 00:45:08.458933 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-03-08 00:45:08.458939 | orchestrator | Sunday 08 March 2026 00:45:05 +0000 (0:00:02.001) 0:00:10.405 ********** 2026-03-08 00:45:08.458946 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-08 00:45:08.458954 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-08 00:45:08.458961 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-08 00:45:08.458968 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-08 00:45:08.458974 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-08 00:45:08.458980 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-08 00:45:08.458987 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-08 00:45:08.458994 | orchestrator | 2026-03-08 00:45:08.459000 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:45:08.459016 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459024 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459029 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459035 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459043 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459049 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459058 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:45:08.459064 | orchestrator | 2026-03-08 00:45:08.459071 | orchestrator | 2026-03-08 00:45:08.459086 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-08 00:45:08.459093 | orchestrator | Sunday 08 March 2026 00:45:07 +0000 (0:00:02.890) 0:00:13.295 ********** 2026-03-08 00:45:08.459100 | orchestrator | =============================================================================== 2026-03-08 00:45:08.459106 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.74s 2026-03-08 00:45:08.459153 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.89s 2026-03-08 00:45:08.459165 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.70s 2026-03-08 00:45:08.459173 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.00s 2026-03-08 00:45:08.459182 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.70s 2026-03-08 00:45:08.459192 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 6c9662bd-0a48-4c07-978b-bc7b51cbe6db is in state SUCCESS 2026-03-08 00:45:08.464406 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:08.470343 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:08.473339 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:08.478199 | orchestrator | 2026-03-08 00:45:08 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:11.699473 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:11.699579 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:11.730425 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is 
in state STARTED 2026-03-08 00:45:11.737216 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:11.738677 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:11.741013 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:11.744688 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:11.744748 | orchestrator | 2026-03-08 00:45:11 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:14.800965 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:14.801324 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:14.805013 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:14.808277 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:14.809265 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:14.809674 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:14.813341 | orchestrator | 2026-03-08 00:45:14 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:14.813401 | orchestrator | 2026-03-08 00:45:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:17.851814 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:17.852242 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in 
state STARTED 2026-03-08 00:45:17.854499 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:17.856401 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:17.858000 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:17.925917 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:17.925992 | orchestrator | 2026-03-08 00:45:17 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:17.926001 | orchestrator | 2026-03-08 00:45:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:20.904265 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:20.905619 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:20.907696 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:20.908570 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:20.910587 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:20.911584 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:20.912956 | orchestrator | 2026-03-08 00:45:20 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:20.913294 | orchestrator | 2026-03-08 00:45:20 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:23.986337 | orchestrator | 2026-03-08 00:45:23 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state 
STARTED 2026-03-08 00:45:23.988988 | orchestrator | 2026-03-08 00:45:23 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:23.991684 | orchestrator | 2026-03-08 00:45:23 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:23.992930 | orchestrator | 2026-03-08 00:45:23 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:23.999254 | orchestrator | 2026-03-08 00:45:24 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:24.001410 | orchestrator | 2026-03-08 00:45:24 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:24.003499 | orchestrator | 2026-03-08 00:45:24 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:24.004576 | orchestrator | 2026-03-08 00:45:24 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:27.161889 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:27.161960 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:27.161967 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:27.161973 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:27.161978 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:27.162007 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:27.162052 | orchestrator | 2026-03-08 00:45:27 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:27.162059 | orchestrator | 2026-03-08 00:45:27 | INFO  | Wait 1 second(s) until the next check 
2026-03-08 00:45:30.181876 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:30.183917 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:30.187897 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:30.188749 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:30.189773 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:30.190400 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:30.191627 | orchestrator | 2026-03-08 00:45:30 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:30.194622 | orchestrator | 2026-03-08 00:45:30 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:33.350958 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state STARTED 2026-03-08 00:45:33.351045 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:33.351058 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:33.351068 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:33.351078 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:33.351088 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:33.351164 | orchestrator | 2026-03-08 00:45:33 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is 
in state STARTED 2026-03-08 00:45:33.351192 | orchestrator | 2026-03-08 00:45:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:36.340491 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task e8407255-b5d5-4625-bebd-6d13bba3841c is in state SUCCESS 2026-03-08 00:45:36.340582 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:36.343556 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:36.343604 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:36.343614 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:36.344856 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:36.344893 | orchestrator | 2026-03-08 00:45:36 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:36.344901 | orchestrator | 2026-03-08 00:45:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:39.410707 | orchestrator | 2026-03-08 00:45:39 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:39.411748 | orchestrator | 2026-03-08 00:45:39 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:39.435394 | orchestrator | 2026-03-08 00:45:39 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:39.440019 | orchestrator | 2026-03-08 00:45:39 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:39.444517 | orchestrator | 2026-03-08 00:45:39 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:39.449430 | orchestrator | 2026-03-08 00:45:39 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in 
state STARTED 2026-03-08 00:45:39.449476 | orchestrator | 2026-03-08 00:45:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:42.556183 | orchestrator | 2026-03-08 00:45:42 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:42.556287 | orchestrator | 2026-03-08 00:45:42 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state STARTED 2026-03-08 00:45:42.556301 | orchestrator | 2026-03-08 00:45:42 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:42.556312 | orchestrator | 2026-03-08 00:45:42 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:42.556322 | orchestrator | 2026-03-08 00:45:42 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:42.556331 | orchestrator | 2026-03-08 00:45:42 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:42.556342 | orchestrator | 2026-03-08 00:45:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:45.798371 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:45.798539 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task 90939db2-d255-49e2-87ea-6fb3d1fbdb49 is in state SUCCESS 2026-03-08 00:45:45.798556 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:45.798575 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:45.798595 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:45.799783 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:45.799831 | orchestrator | 2026-03-08 00:45:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 
00:45:48.844599 | orchestrator | 2026-03-08 00:45:48 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:48.844703 | orchestrator | 2026-03-08 00:45:48 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:48.847899 | orchestrator | 2026-03-08 00:45:48 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:48.848887 | orchestrator | 2026-03-08 00:45:48 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:48.851277 | orchestrator | 2026-03-08 00:45:48 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:48.853274 | orchestrator | 2026-03-08 00:45:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:51.899001 | orchestrator | 2026-03-08 00:45:51 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:51.899069 | orchestrator | 2026-03-08 00:45:51 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:51.899108 | orchestrator | 2026-03-08 00:45:51 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:45:51.900024 | orchestrator | 2026-03-08 00:45:51 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED 2026-03-08 00:45:51.900153 | orchestrator | 2026-03-08 00:45:51 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED 2026-03-08 00:45:51.900164 | orchestrator | 2026-03-08 00:45:51 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:54.974212 | orchestrator | 2026-03-08 00:45:54 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:45:54.976595 | orchestrator | 2026-03-08 00:45:54 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED 2026-03-08 00:45:54.978462 | orchestrator | 2026-03-08 00:45:54 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 
00:45:54.980380 | orchestrator | 2026-03-08 00:45:54 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:45:54.983386 | orchestrator | 2026-03-08 00:45:54 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:45:54.985050 | orchestrator | 2026-03-08 00:45:54 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:45:58.192251 | orchestrator | 2026-03-08 00:45:58 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:45:58.192352 | orchestrator | 2026-03-08 00:45:58 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:45:58.192368 | orchestrator | 2026-03-08 00:45:58 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:45:58.192381 | orchestrator | 2026-03-08 00:45:58 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:45:58.196446 | orchestrator | 2026-03-08 00:45:58 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:45:58.196517 | orchestrator | 2026-03-08 00:45:58 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:01.269963 | orchestrator | 2026-03-08 00:46:01 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:01.271708 | orchestrator | 2026-03-08 00:46:01 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:01.272988 | orchestrator | 2026-03-08 00:46:01 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:01.277585 | orchestrator | 2026-03-08 00:46:01 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:46:01.279202 | orchestrator | 2026-03-08 00:46:01 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:01.279267 | orchestrator | 2026-03-08 00:46:01 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:04.393683 | orchestrator | 2026-03-08 00:46:04 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:04.394845 | orchestrator | 2026-03-08 00:46:04 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:04.394921 | orchestrator | 2026-03-08 00:46:04 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:04.395603 | orchestrator | 2026-03-08 00:46:04 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:46:04.396128 | orchestrator | 2026-03-08 00:46:04 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:04.396174 | orchestrator | 2026-03-08 00:46:04 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:07.451499 | orchestrator | 2026-03-08 00:46:07 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:07.451726 | orchestrator | 2026-03-08 00:46:07 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:07.452757 | orchestrator | 2026-03-08 00:46:07 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:07.453476 | orchestrator | 2026-03-08 00:46:07 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:46:07.455026 | orchestrator | 2026-03-08 00:46:07 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:07.455120 | orchestrator | 2026-03-08 00:46:07 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:10.500698 | orchestrator | 2026-03-08 00:46:10 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:10.501143 | orchestrator | 2026-03-08 00:46:10 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:10.502384 | orchestrator | 2026-03-08 00:46:10 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:10.504523 | orchestrator | 2026-03-08 00:46:10 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:46:10.504612 | orchestrator | 2026-03-08 00:46:10 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:10.504638 | orchestrator | 2026-03-08 00:46:10 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:13.568166 | orchestrator | 2026-03-08 00:46:13 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:13.568263 | orchestrator | 2026-03-08 00:46:13 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:13.568273 | orchestrator | 2026-03-08 00:46:13 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:13.571723 | orchestrator | 2026-03-08 00:46:13 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:46:13.575254 | orchestrator | 2026-03-08 00:46:13 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:13.575326 | orchestrator | 2026-03-08 00:46:13 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:16.621504 | orchestrator | 2026-03-08 00:46:16 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:16.625199 | orchestrator | 2026-03-08 00:46:16 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:16.626298 | orchestrator | 2026-03-08 00:46:16 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:16.636510 | orchestrator | 2026-03-08 00:46:16 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state STARTED
2026-03-08 00:46:16.636599 | orchestrator | 2026-03-08 00:46:16 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:16.640087 | orchestrator | 2026-03-08 00:46:16 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:19.687366 | orchestrator | 2026-03-08 00:46:19 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:19.688868 | orchestrator | 2026-03-08 00:46:19 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:19.691283 | orchestrator | 2026-03-08 00:46:19 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:19.691860 | orchestrator |
2026-03-08 00:46:19.691907 | orchestrator |
2026-03-08 00:46:19.691922 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-08 00:46:19.691934 | orchestrator |
2026-03-08 00:46:19.691945 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-08 00:46:19.691984 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:00.751) 0:00:00.751 **********
2026-03-08 00:46:19.691996 | orchestrator | ok: [testbed-manager] => {
2026-03-08 00:46:19.692009 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-08 00:46:19.692021 | orchestrator | }
2026-03-08 00:46:19.692033 | orchestrator |
2026-03-08 00:46:19.692044 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-08 00:46:19.692095 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:00.500) 0:00:01.251 **********
2026-03-08 00:46:19.692106 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.692118 | orchestrator |
2026-03-08 00:46:19.692129 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-08 00:46:19.692139 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:01.439) 0:00:02.691 **********
2026-03-08 00:46:19.692151 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-08 00:46:19.692162 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-08 00:46:19.692172 | orchestrator |
2026-03-08 00:46:19.692183 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-08 00:46:19.692194 | orchestrator | Sunday 08 March 2026 00:44:58 +0000 (0:00:01.559) 0:00:04.250 **********
2026-03-08 00:46:19.692254 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.692267 | orchestrator |
2026-03-08 00:46:19.692278 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-08 00:46:19.692288 | orchestrator | Sunday 08 March 2026 00:45:02 +0000 (0:00:04.074) 0:00:08.325 **********
2026-03-08 00:46:19.692299 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.692310 | orchestrator |
2026-03-08 00:46:19.692320 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-08 00:46:19.692331 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:01.743) 0:00:10.069 **********
2026-03-08 00:46:19.692346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-08 00:46:19.692357 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.692368 | orchestrator |
2026-03-08 00:46:19.692379 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-08 00:46:19.692389 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:26.106) 0:00:36.176 **********
2026-03-08 00:46:19.692400 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.692411 | orchestrator |
2026-03-08 00:46:19.692422 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:19.692433 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:19.692445 | orchestrator |
2026-03-08 00:46:19.692456 | orchestrator |
2026-03-08 00:46:19.692467 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:19.692478 | orchestrator | Sunday 08 March 2026 00:45:34 +0000 (0:00:04.213) 0:00:40.389 **********
2026-03-08 00:46:19.692489 | orchestrator | ===============================================================================
2026-03-08 00:46:19.692500 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.11s
2026-03-08 00:46:19.692510 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.21s
2026-03-08 00:46:19.692521 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.07s
2026-03-08 00:46:19.692531 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.74s
2026-03-08 00:46:19.692542 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.56s
2026-03-08 00:46:19.692553 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.44s
2026-03-08 00:46:19.692563 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.50s
2026-03-08 00:46:19.692574 | orchestrator |
2026-03-08 00:46:19.692593 | orchestrator |
2026-03-08 00:46:19.692604 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-08 00:46:19.692615 | orchestrator |
2026-03-08 00:46:19.692625 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-08 00:46:19.692636 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.764) 0:00:00.764 **********
2026-03-08 00:46:19.692647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-08 00:46:19.692659 | orchestrator |
2026-03-08 00:46:19.692670 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-08 00:46:19.692680 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.747) 0:00:01.511 **********
2026-03-08 00:46:19.692691 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-08 00:46:19.692702 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-08 00:46:19.692712 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-08 00:46:19.692723 | orchestrator |
2026-03-08 00:46:19.692734 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-08 00:46:19.692744 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:02.370) 0:00:03.882 **********
2026-03-08 00:46:19.692755 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.692765 | orchestrator |
2026-03-08 00:46:19.692776 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-08 00:46:19.692787 | orchestrator | Sunday 08 March 2026 00:44:58 +0000 (0:00:01.796) 0:00:05.678 **********
2026-03-08 00:46:19.692814 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-08 00:46:19.692825 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.692836 | orchestrator |
2026-03-08 00:46:19.692847 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-08 00:46:19.692857 | orchestrator | Sunday 08 March 2026 00:45:35 +0000 (0:00:36.606) 0:00:42.285 **********
2026-03-08 00:46:19.692868 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.692879 | orchestrator |
2026-03-08 00:46:19.692890 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-08 00:46:19.692900 | orchestrator | Sunday 08 March 2026 00:45:37 +0000 (0:00:02.045) 0:00:44.331 **********
2026-03-08 00:46:19.692911 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.692921 | orchestrator |
2026-03-08 00:46:19.692932 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-08 00:46:19.692942 | orchestrator | Sunday 08 March 2026 00:45:38 +0000 (0:00:00.955) 0:00:45.286 **********
2026-03-08 00:46:19.692953 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.692964 | orchestrator |
2026-03-08 00:46:19.692974 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-08 00:46:19.692985 | orchestrator | Sunday 08 March 2026 00:45:40 +0000 (0:00:02.460) 0:00:47.747 **********
2026-03-08 00:46:19.692995 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.693006 | orchestrator |
2026-03-08 00:46:19.693017 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-08 00:46:19.693027 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:00.809) 0:00:48.557 **********
2026-03-08 00:46:19.693038 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.693086 | orchestrator |
2026-03-08 00:46:19.693107 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-08 00:46:19.693126 | orchestrator | Sunday 08 March 2026 00:45:42 +0000 (0:00:00.731) 0:00:49.288 **********
2026-03-08 00:46:19.693144 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.693156 | orchestrator |
2026-03-08 00:46:19.693167 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:19.693178 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:19.693217 | orchestrator |
2026-03-08 00:46:19.693239 | orchestrator |
2026-03-08 00:46:19.693256 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:19.693267 | orchestrator | Sunday 08 March 2026 00:45:43 +0000 (0:00:01.073) 0:00:50.362 **********
2026-03-08 00:46:19.693283 | orchestrator | ===============================================================================
2026-03-08 00:46:19.693302 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.61s
2026-03-08 00:46:19.693321 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.46s
2026-03-08 00:46:19.693377 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.37s
2026-03-08 00:46:19.693397 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.05s
2026-03-08 00:46:19.693416 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.80s
2026-03-08 00:46:19.693427 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.07s
2026-03-08 00:46:19.693437 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.96s
2026-03-08 00:46:19.693448 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.81s
2026-03-08 00:46:19.693459 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.75s
2026-03-08 00:46:19.693469 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.73s
2026-03-08 00:46:19.693480 | orchestrator |
2026-03-08 00:46:19.693490 | orchestrator |
2026-03-08 00:46:19.693501 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-08 00:46:19.693512 | orchestrator |
2026-03-08 00:46:19.693522 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-08 00:46:19.693533 | orchestrator | Sunday 08 March 2026 00:45:13 +0000 (0:00:00.243) 0:00:00.243 **********
2026-03-08 00:46:19.693544 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.693555 | orchestrator |
2026-03-08 00:46:19.693565 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-08 00:46:19.693576 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:01.020) 0:00:01.263 **********
2026-03-08 00:46:19.693586 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-08 00:46:19.693597 | orchestrator |
2026-03-08 00:46:19.693607 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-08 00:46:19.693618 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:00.606) 0:00:01.870 **********
2026-03-08 00:46:19.693629 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.693640 | orchestrator |
2026-03-08 00:46:19.693650 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-08 00:46:19.693661 | orchestrator | Sunday 08 March 2026 00:45:16 +0000 (0:00:01.232) 0:00:03.103 **********
2026-03-08 00:46:19.693671 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-08 00:46:19.693690 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:19.693710 | orchestrator |
2026-03-08 00:46:19.693730 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-08 00:46:19.693748 | orchestrator | Sunday 08 March 2026 00:46:14 +0000 (0:00:57.969) 0:01:01.072 **********
2026-03-08 00:46:19.693765 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:19.693776 | orchestrator |
2026-03-08 00:46:19.693787 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:19.693797 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:19.693808 | orchestrator |
2026-03-08 00:46:19.693819 | orchestrator |
2026-03-08 00:46:19.693829 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:19.693851 | orchestrator | Sunday 08 March 2026 00:46:18 +0000 (0:00:03.869) 0:01:04.941 **********
2026-03-08 00:46:19.693863 | orchestrator | ===============================================================================
2026-03-08 00:46:19.693882 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.97s
2026-03-08 00:46:19.693900 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.87s
2026-03-08 00:46:19.693917 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.23s
2026-03-08 00:46:19.693933 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.02s
2026-03-08 00:46:19.693951 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.61s
2026-03-08 00:46:19.693970 | orchestrator | 2026-03-08 00:46:19 | INFO  | Task 27a3a4fc-818e-4ce7-9365-7aaa3a2fa6b5 is in state SUCCESS
2026-03-08 00:46:19.694176 | orchestrator | 2026-03-08 00:46:19 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:19.694666 | orchestrator | 2026-03-08 00:46:19 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:22.734690 | orchestrator | 2026-03-08 00:46:22 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:22.735186 | orchestrator | 2026-03-08 00:46:22 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:22.736487 | orchestrator | 2026-03-08 00:46:22 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:22.738638 | orchestrator | 2026-03-08 00:46:22 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:22.740827 | orchestrator | 2026-03-08 00:46:22 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:25.782271 | orchestrator | 2026-03-08 00:46:25 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:25.786667 | orchestrator | 2026-03-08 00:46:25 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:25.788509 | orchestrator | 2026-03-08 00:46:25 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:25.789870 | orchestrator | 2026-03-08 00:46:25 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:25.790768 | orchestrator | 2026-03-08 00:46:25 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:28.848735 | orchestrator | 2026-03-08 00:46:28 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:28.850703 | orchestrator | 2026-03-08 00:46:28 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:28.853334 | orchestrator | 2026-03-08 00:46:28 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:28.856392 | orchestrator | 2026-03-08 00:46:28 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:28.856441 | orchestrator | 2026-03-08 00:46:28 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:31.927179 | orchestrator | 2026-03-08 00:46:31 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:31.930487 | orchestrator | 2026-03-08 00:46:31 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state STARTED
2026-03-08 00:46:31.932087 | orchestrator | 2026-03-08 00:46:31 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:31.933440 | orchestrator | 2026-03-08 00:46:31 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:31.933794 | orchestrator | 2026-03-08 00:46:31 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:35.045181 | orchestrator | 2026-03-08 00:46:35 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:35.047788 | orchestrator | 2026-03-08 00:46:35 | INFO  | Task 6b2aa22c-55a1-444d-a2f5-61904072ea9c is in state SUCCESS
2026-03-08 00:46:35.048420 | orchestrator |
2026-03-08 00:46:35.048473 | orchestrator |
2026-03-08 00:46:35.048490 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:46:35.048505 | orchestrator |
2026-03-08 00:46:35.048518 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:46:35.048532 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.650) 0:00:00.650 **********
2026-03-08 00:46:35.048545 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-08 00:46:35.048558 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-08 00:46:35.048571 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-08 00:46:35.048582 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-08 00:46:35.048594 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-08 00:46:35.048607 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-08 00:46:35.048621 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-08 00:46:35.048633 | orchestrator |
2026-03-08 00:46:35.048646 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-08 00:46:35.048660 | orchestrator |
2026-03-08 00:46:35.048674 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-08 00:46:35.048688 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:02.821) 0:00:03.472 **********
2026-03-08 00:46:35.048719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:46:35.048740 | orchestrator |
2026-03-08 00:46:35.048752 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-08 00:46:35.048760 | orchestrator | Sunday 08 March 2026 00:44:58 +0000 (0:00:01.502) 0:00:04.974 **********
2026-03-08 00:46:35.048768 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:35.048776 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:35.048784 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:35.048792 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:35.048800 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:35.048808 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:35.048816 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:35.048823 | orchestrator |
2026-03-08 00:46:35.048831 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-08 00:46:35.048839 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:02.323) 0:00:07.298 **********
2026-03-08 00:46:35.048847 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:35.048855 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:35.048863 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:35.048871 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:35.048879 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:35.048887 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:35.048894 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:35.048902 | orchestrator |
2026-03-08 00:46:35.048910 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-08 00:46:35.048918 | orchestrator | Sunday 08 March 2026 00:45:03 +0000 (0:00:03.316) 0:00:10.615 **********
2026-03-08 00:46:35.048926 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:35.048941 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:35.048949 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:35.048957 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:35.048964 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:35.048972 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:35.048980 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:35.048988 | orchestrator |
2026-03-08 00:46:35.048996 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-08 00:46:35.049005 | orchestrator | Sunday 08 March 2026 00:45:07 +0000 (0:00:03.299) 0:00:13.914 **********
2026-03-08 00:46:35.049028 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:35.049061 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:35.049071 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:35.049080 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:35.049090 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:35.049099 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:35.049108 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:35.049118 | orchestrator |
2026-03-08 00:46:35.049127 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-08 00:46:35.049137 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:10.721) 0:00:24.636 **********
2026-03-08 00:46:35.049147 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:35.049156 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:35.049165 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:35.049174 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:35.049183 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:35.049194 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:35.049203 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:35.049213 | orchestrator |
2026-03-08 00:46:35.049223 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-08 00:46:35.049231 | orchestrator | Sunday 08 March 2026 00:45:59 +0000 (0:00:41.316) 0:01:05.952 **********
2026-03-08 00:46:35.049239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:46:35.049249 | orchestrator |
2026-03-08 00:46:35.049257 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-08 00:46:35.049264 | orchestrator | Sunday 08 March 2026 00:46:00 +0000 (0:00:01.538) 0:01:07.491 **********
2026-03-08 00:46:35.049272 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-08 00:46:35.049281 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-08 00:46:35.049289 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-08 00:46:35.049296 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-08 00:46:35.049319 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-08 00:46:35.049327 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-08 00:46:35.049335 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-08 00:46:35.049343 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-08 00:46:35.049351 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-08 00:46:35.049359 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-08 00:46:35.049366 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-08 00:46:35.049374 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-08 00:46:35.049382 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-08 00:46:35.049390 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-08 00:46:35.049398 | orchestrator |
2026-03-08 00:46:35.049406 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-08 00:46:35.049415 | orchestrator | Sunday 08 March 2026 00:46:06 +0000 (0:00:06.067) 0:01:13.558 **********
2026-03-08 00:46:35.049422 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:35.049430 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:35.049438 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:35.049446 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:35.049453 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:35.049461 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:35.049469 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:35.049487 | orchestrator |
2026-03-08 00:46:35.049503 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-08 00:46:35.049511 | orchestrator | Sunday 08 March 2026 00:46:08 +0000 (0:00:01.322) 0:01:14.881 **********
2026-03-08 00:46:35.049524 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:35.049533 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:35.049541 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:35.049548 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:35.049556 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:35.049564 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:35.049572 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:35.049580 | orchestrator |
2026-03-08 00:46:35.049588 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-08 00:46:35.049596 | orchestrator | Sunday 08 March 2026 00:46:10 +0000 (0:00:02.220) 0:01:17.102 **********
2026-03-08 00:46:35.049604 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:35.049612 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:35.049620 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:35.049628 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:35.049636 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:35.049644 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:35.049658 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:35.049672 | orchestrator |
2026-03-08 00:46:35.049687 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-08 00:46:35.049702 | orchestrator | Sunday 08 March 2026 00:46:12 +0000 (0:00:02.005) 0:01:19.107 **********
2026-03-08 00:46:35.049717 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:35.049733 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:35.049749 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:35.049764 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:35.049779 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:35.049789 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:35.049797 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:35.049805 | orchestrator |
2026-03-08 00:46:35.049817 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-08 00:46:35.049826 | orchestrator | Sunday 08 March 2026 00:46:15 +0000 (0:00:02.991) 0:01:22.099 **********
2026-03-08 00:46:35.049834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-08 00:46:35.049843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:46:35.049852 | orchestrator |
2026-03-08 00:46:35.049859 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-08 00:46:35.049867 | orchestrator | Sunday 08 March 2026 00:46:17 +0000 (0:00:01.718) 0:01:23.818 **********
2026-03-08 00:46:35.049875 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:35.049883 | orchestrator |
2026-03-08 00:46:35.049891 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-08 00:46:35.049899 | orchestrator | Sunday 08 March 2026 00:46:19 +0000 (0:00:02.684) 0:01:26.502 **********
2026-03-08 00:46:35.049907 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:35.049915 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:35.049922 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:35.049930 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:35.049938 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:35.049946 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:35.049954 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:35.049961 | orchestrator |
2026-03-08 00:46:35.049969 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:35.049977 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.049986 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.050007 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.050094 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.050115 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.050124 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.050132 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:35.050139 | orchestrator |
2026-03-08 00:46:35.050148 | orchestrator |
2026-03-08 00:46:35.050155 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:35.050163 | orchestrator | Sunday 08 March 2026 00:46:31 +0000 (0:00:11.638) 0:01:38.140 **********
2026-03-08 00:46:35.050180 | orchestrator | ===============================================================================
2026-03-08 00:46:35.050196 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.32s
2026-03-08 00:46:35.050204 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.64s
2026-03-08 00:46:35.050212 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.72s
2026-03-08 00:46:35.050220 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.07s
2026-03-08 00:46:35.050228 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.32s
2026-03-08 00:46:35.050235 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.30s
2026-03-08 00:46:35.050243 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.99s
2026-03-08 00:46:35.050251 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.82s
2026-03-08 00:46:35.050259 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.68s
2026-03-08 00:46:35.050267 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.32s
2026-03-08 00:46:35.050275 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.22s
2026-03-08 00:46:35.050283 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.01s
2026-03-08 00:46:35.050291 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.72s
2026-03-08 00:46:35.050299 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.54s
2026-03-08 00:46:35.050306 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.50s
2026-03-08 00:46:35.050314 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.32s
2026-03-08 00:46:35.050851 | orchestrator | 2026-03-08 00:46:35 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:35.053626 | orchestrator | 2026-03-08 00:46:35 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:35.053683 | orchestrator | 2026-03-08 00:46:35 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:38.106365 |
orchestrator | 2026-03-08 00:46:38 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:38.108822 | orchestrator | 2026-03-08 00:46:38 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:38.113505 | orchestrator | 2026-03-08 00:46:38 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:38.113594 | orchestrator | 2026-03-08 00:46:38 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:41.176537 | orchestrator | 2026-03-08 00:46:41 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:41.183203 | orchestrator | 2026-03-08 00:46:41 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:41.189078 | orchestrator | 2026-03-08 00:46:41 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:41.189133 | orchestrator | 2026-03-08 00:46:41 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:44.283848 | orchestrator | 2026-03-08 00:46:44 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:44.286745 | orchestrator | 2026-03-08 00:46:44 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:44.290261 | orchestrator | 2026-03-08 00:46:44 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:44.290316 | orchestrator | 2026-03-08 00:46:44 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:47.324106 | orchestrator | 2026-03-08 00:46:47 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:47.325718 | orchestrator | 2026-03-08 00:46:47 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:47.327902 | orchestrator | 2026-03-08 00:46:47 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:47.327958 | orchestrator | 2026-03-08 00:46:47 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:50.364981 | orchestrator | 2026-03-08 00:46:50 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:50.366632 | orchestrator | 2026-03-08 00:46:50 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:50.367779 | orchestrator | 2026-03-08 00:46:50 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:50.367867 | orchestrator | 2026-03-08 00:46:50 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:53.408707 | orchestrator | 2026-03-08 00:46:53 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:53.408757 | orchestrator | 2026-03-08 00:46:53 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:53.412695 | orchestrator | 2026-03-08 00:46:53 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:53.414087 | orchestrator | 2026-03-08 00:46:53 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:56.462948 | orchestrator | 2026-03-08 00:46:56 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:56.465388 | orchestrator | 2026-03-08 00:46:56 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:56.466779 | orchestrator | 2026-03-08 00:46:56 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:56.468507 | orchestrator | 2026-03-08 00:46:56 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:46:59.506517 | orchestrator | 2026-03-08 00:46:59 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:46:59.506985 | orchestrator | 2026-03-08 00:46:59 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:46:59.509429 | orchestrator | 2026-03-08 00:46:59 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:46:59.510446 | orchestrator | 2026-03-08 00:46:59 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:02.557759 | orchestrator | 2026-03-08 00:47:02 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:02.559687 | orchestrator | 2026-03-08 00:47:02 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:02.560811 | orchestrator | 2026-03-08 00:47:02 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:47:02.560850 | orchestrator | 2026-03-08 00:47:02 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:05.610117 | orchestrator | 2026-03-08 00:47:05 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:05.610736 | orchestrator | 2026-03-08 00:47:05 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:05.611537 | orchestrator | 2026-03-08 00:47:05 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:47:05.611658 | orchestrator | 2026-03-08 00:47:05 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:08.650521 | orchestrator | 2026-03-08 00:47:08 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:08.651540 | orchestrator | 2026-03-08 00:47:08 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:08.652876 | orchestrator | 2026-03-08 00:47:08 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state STARTED
2026-03-08 00:47:08.653593 | orchestrator | 2026-03-08 00:47:08 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:11.687573 | orchestrator | 2026-03-08 00:47:11 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:11.687990 | orchestrator | 2026-03-08 00:47:11 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:11.695375 | orchestrator |
2026-03-08 00:47:11.695440 | orchestrator | 2026-03-08 00:47:11 | INFO  | Task 24935b9e-d109-48f9-abb5-d26259bfe714 is in state SUCCESS
2026-03-08 00:47:11.696781 | orchestrator |
2026-03-08 00:47:11.696898 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-08 00:47:11.696914 | orchestrator |
2026-03-08 00:47:11.696923 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-08 00:47:11.696931 | orchestrator | Sunday 08 March 2026 00:44:46 +0000 (0:00:00.211) 0:00:00.211 **********
2026-03-08 00:47:11.696939 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:47:11.696948 | orchestrator |
2026-03-08 00:47:11.696956 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-08 00:47:11.696963 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:01.161) 0:00:01.372 **********
2026-03-08 00:47:11.696971 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.696980 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.696987 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.696996 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.697018 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.697025 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.697032 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.697040 | orchestrator
| changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697048 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.697055 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697063 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.697088 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.697097 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.697105 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.697113 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-08 00:47:11.697121 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697128 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697136 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697143 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-08 00:47:11.697151 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697158 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-08 00:47:11.697166 | orchestrator |
2026-03-08 00:47:11.697173 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-08 00:47:11.697181 | orchestrator | Sunday 08 March 2026 00:44:51 +0000 (0:00:03.954) 0:00:05.327 **********
2026-03-08 00:47:11.697203 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:47:11.697243 | orchestrator |
2026-03-08 00:47:11.697251 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-08 00:47:11.697259 | orchestrator | Sunday 08 March 2026 00:44:52 +0000 (0:00:01.227) 0:00:06.555 **********
2026-03-08 00:47:11.697270 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697328 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697415 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697435 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697517 | orchestrator |
2026-03-08 00:47:11.697525 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-08 00:47:11.697532 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:05.407) 0:00:11.963 **********
2026-03-08 00:47:11.697540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697551 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697559 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697567 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:47:11.697579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697634 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:47:11.697642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697673 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:47:11.697681 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:47:11.697689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:11.697697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:11.697712 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:47:11.697720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.697731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697747 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:11.697766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.697774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697789 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:11.697797 | orchestrator | 2026-03-08 00:47:11.697805 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-08 00:47:11.697813 | orchestrator | Sunday 08 March 2026 00:44:59 +0000 (0:00:01.973) 0:00:13.936 ********** 2026-03-08 00:47:11.697821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.697829 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697845 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:47:11.697854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.697874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697891 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:47:11.697899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.697907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697923 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:47:11.697931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.697942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.697963 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:47:11.698672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.698732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.698742 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.698751 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:47:11.698765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.698774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.698817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:11.698835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.698850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.698859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.698867 | orchestrator | skipping: [testbed-node-5] 2026-03-08 
00:47:11.698875 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:11.698883 | orchestrator | 2026-03-08 00:47:11.698892 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-08 00:47:11.698900 | orchestrator | Sunday 08 March 2026 00:45:02 +0000 (0:00:02.954) 0:00:16.890 ********** 2026-03-08 00:47:11.698908 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:47:11.698915 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:47:11.698922 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:47:11.698930 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:47:11.698937 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:47:11.698944 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:11.698952 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:11.698960 | orchestrator | 2026-03-08 00:47:11.698967 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-08 00:47:11.698975 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:01.892) 0:00:18.783 ********** 2026-03-08 00:47:11.698982 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:47:11.698990 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:47:11.698998 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:47:11.699072 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:47:11.699080 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:47:11.699088 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:11.699095 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:11.699103 | orchestrator | 2026-03-08 00:47:11.699110 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-08 00:47:11.699118 | orchestrator | Sunday 08 March 2026 00:45:06 +0000 (0:00:01.788) 0:00:20.572 ********** 2026-03-08 00:47:11.699126 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699186 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-08 00:47:11.699202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699225 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699254 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699305 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699328 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699348 | orchestrator | 2026-03-08 00:47:11.699360 | orchestrator | TASK [common : Find custom fluentd input config files] 
************************* 2026-03-08 00:47:11.699372 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:07.809) 0:00:28.381 ********** 2026-03-08 00:47:11.699384 | orchestrator | [WARNING]: Skipped 2026-03-08 00:47:11.699396 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-08 00:47:11.699408 | orchestrator | to this access issue: 2026-03-08 00:47:11.699418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-08 00:47:11.699426 | orchestrator | directory 2026-03-08 00:47:11.699434 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 00:47:11.699442 | orchestrator | 2026-03-08 00:47:11.699449 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-08 00:47:11.699461 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:01.534) 0:00:29.915 ********** 2026-03-08 00:47:11.699469 | orchestrator | [WARNING]: Skipped 2026-03-08 00:47:11.699477 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-08 00:47:11.699484 | orchestrator | to this access issue: 2026-03-08 00:47:11.699492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-08 00:47:11.699500 | orchestrator | directory 2026-03-08 00:47:11.699508 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 00:47:11.699515 | orchestrator | 2026-03-08 00:47:11.699523 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-08 00:47:11.699531 | orchestrator | Sunday 08 March 2026 00:45:16 +0000 (0:00:00.874) 0:00:30.790 ********** 2026-03-08 00:47:11.699539 | orchestrator | [WARNING]: Skipped 2026-03-08 00:47:11.699547 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-08 00:47:11.699554 | orchestrator | to this access issue: 2026-03-08 
00:47:11.699562 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-08 00:47:11.699570 | orchestrator | directory 2026-03-08 00:47:11.699578 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 00:47:11.699585 | orchestrator | 2026-03-08 00:47:11.699593 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-08 00:47:11.699600 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:01.097) 0:00:31.887 ********** 2026-03-08 00:47:11.699608 | orchestrator | [WARNING]: Skipped 2026-03-08 00:47:11.699615 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-08 00:47:11.699623 | orchestrator | to this access issue: 2026-03-08 00:47:11.699631 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-08 00:47:11.699638 | orchestrator | directory 2026-03-08 00:47:11.699646 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 00:47:11.699654 | orchestrator | 2026-03-08 00:47:11.699661 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-08 00:47:11.699669 | orchestrator | Sunday 08 March 2026 00:45:18 +0000 (0:00:00.980) 0:00:32.868 ********** 2026-03-08 00:47:11.699677 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:11.699685 | orchestrator | changed: [testbed-manager] 2026-03-08 00:47:11.699692 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:47:11.699699 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:47:11.699707 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:11.699718 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:11.699725 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:47:11.699733 | orchestrator | 2026-03-08 00:47:11.699741 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-08 
00:47:11.699749 | orchestrator | Sunday 08 March 2026 00:45:23 +0000 (0:00:04.209) 0:00:37.078 ********** 2026-03-08 00:47:11.699757 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699765 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699772 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699788 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699795 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699803 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-08 00:47:11.699810 | orchestrator | 2026-03-08 00:47:11.699818 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-08 00:47:11.699830 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:04.477) 0:00:41.555 ********** 2026-03-08 00:47:11.699838 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:11.699846 | orchestrator | changed: [testbed-manager] 2026-03-08 00:47:11.699854 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:11.699862 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:11.699873 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:47:11.699881 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:47:11.699889 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:47:11.699896 | orchestrator | 2026-03-08 00:47:11.699904 | orchestrator | TASK [common : Ensuring config 
directories have correct owner and permission] *** 2026-03-08 00:47:11.699911 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:03.198) 0:00:44.754 ********** 2026-03-08 00:47:11.699919 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.699935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699943 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.699955 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.699984 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.699996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.700020 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700028 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.700044 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700051 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.700081 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.700099 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700107 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700116 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:11.700136 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700152 | orchestrator | 2026-03-08 00:47:11.700160 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-08 00:47:11.700168 | orchestrator | Sunday 08 March 2026 00:45:34 +0000 (0:00:04.033) 0:00:48.788 ********** 2026-03-08 00:47:11.700176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700192 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700200 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700207 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700215 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700223 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-08 00:47:11.700231 | orchestrator | 2026-03-08 00:47:11.700243 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-08 00:47:11.700251 | orchestrator | Sunday 08 March 2026 00:45:38 +0000 (0:00:04.156) 0:00:52.945 ********** 2026-03-08 00:47:11.700259 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700274 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700282 | 
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700290 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700298 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700305 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:11.700313 | orchestrator | 2026-03-08 00:47:11.700321 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-08 00:47:11.700329 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:02.582) 0:00:55.527 ********** 2026-03-08 00:47:11.700337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700368 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:11.700436 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700456 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700501 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:11.700537 | orchestrator | 2026-03-08 00:47:11.700546 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-08 00:47:11.700554 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:03.543) 0:00:59.070 ********** 2026-03-08 00:47:11.700565 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:11.700573 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:11.700580 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:11.700588 | orchestrator | changed: [testbed-manager] 2026-03-08 00:47:11.700595 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:47:11.700603 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:47:11.700611 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:47:11.700618 | orchestrator | 2026-03-08 00:47:11.700626 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-08 00:47:11.700634 | orchestrator | Sunday 08 March 2026 00:45:47 +0000 (0:00:01.939) 0:01:01.009 ********** 2026-03-08 00:47:11.700641 | orchestrator | changed: [testbed-manager] 2026-03-08 00:47:11.700649 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:11.700656 | 
orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:11.700663 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:11.700671 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:11.700678 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:11.700685 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:11.700693 | orchestrator |
2026-03-08 00:47:11.700700 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700706 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:01.348) 0:01:02.358 **********
2026-03-08 00:47:11.700713 | orchestrator |
2026-03-08 00:47:11.700722 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700733 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:00.072) 0:01:02.431 **********
2026-03-08 00:47:11.700740 | orchestrator |
2026-03-08 00:47:11.700747 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700753 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:00.065) 0:01:02.496 **********
2026-03-08 00:47:11.700760 | orchestrator |
2026-03-08 00:47:11.700767 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700774 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:00.247) 0:01:02.744 **********
2026-03-08 00:47:11.700780 | orchestrator |
2026-03-08 00:47:11.700787 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700794 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:00.074) 0:01:02.818 **********
2026-03-08 00:47:11.700801 | orchestrator |
2026-03-08 00:47:11.700807 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700814 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:00.079) 0:01:02.898 **********
2026-03-08 00:47:11.700821 | orchestrator |
2026-03-08 00:47:11.700828 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:11.700834 | orchestrator | Sunday 08 March 2026 00:45:49 +0000 (0:00:00.083) 0:01:02.982 **********
2026-03-08 00:47:11.700841 | orchestrator |
2026-03-08 00:47:11.700850 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-08 00:47:11.700858 | orchestrator | Sunday 08 March 2026 00:45:49 +0000 (0:00:00.101) 0:01:03.083 **********
2026-03-08 00:47:11.700867 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:11.700873 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:11.700880 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:11.700887 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:11.700894 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:11.700901 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:11.700907 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:11.700914 | orchestrator |
2026-03-08 00:47:11.700921 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-08 00:47:11.700928 | orchestrator | Sunday 08 March 2026 00:46:25 +0000 (0:00:36.099) 0:01:39.182 **********
2026-03-08 00:47:11.700934 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:11.700941 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:11.700948 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:11.700954 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:11.700961 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:11.700968 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:11.700975 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:11.700983 | orchestrator |
2026-03-08 00:47:11.700989 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-08 00:47:11.700999 | orchestrator | Sunday 08 March 2026 00:46:58 +0000 (0:00:33.780) 0:02:12.963 **********
2026-03-08 00:47:11.701019 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:47:11.701026 | orchestrator | ok: [testbed-manager]
2026-03-08 00:47:11.701033 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:47:11.701040 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:47:11.701047 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:47:11.701053 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:47:11.701060 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:47:11.701067 | orchestrator |
2026-03-08 00:47:11.701074 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-08 00:47:11.701081 | orchestrator | Sunday 08 March 2026 00:47:01 +0000 (0:00:02.110) 0:02:15.073 **********
2026-03-08 00:47:11.701087 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:11.701094 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:11.701101 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:11.701108 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:11.701115 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:11.701125 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:11.701132 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:11.701139 | orchestrator |
2026-03-08 00:47:11.701146 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:47:11.701155 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701163 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701170 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701181 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701188 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701195 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701202 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-08 00:47:11.701208 | orchestrator |
2026-03-08 00:47:11.701215 | orchestrator |
2026-03-08 00:47:11.701222 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:47:11.701229 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:09.193) 0:02:24.267 **********
2026-03-08 00:47:11.701235 | orchestrator | ===============================================================================
2026-03-08 00:47:11.701242 | orchestrator | common : Restart fluentd container ------------------------------------- 36.10s
2026-03-08 00:47:11.701249 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.78s
2026-03-08 00:47:11.701256 | orchestrator | common : Restart cron container ----------------------------------------- 9.19s
2026-03-08 00:47:11.701263 | orchestrator | common : Copying over config.json files for services -------------------- 7.81s
2026-03-08 00:47:11.701271 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.41s
2026-03-08 00:47:11.701278 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.48s
2026-03-08 00:47:11.701285 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.21s
2026-03-08 00:47:11.701292 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.16s
2026-03-08 00:47:11.701299 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.03s
2026-03-08 00:47:11.701305 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.95s
2026-03-08 00:47:11.701312 | orchestrator | common : Check common containers ---------------------------------------- 3.54s
2026-03-08 00:47:11.701319 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.20s
2026-03-08 00:47:11.701326 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.95s
2026-03-08 00:47:11.701332 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.58s
2026-03-08 00:47:11.701339 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.11s
2026-03-08 00:47:11.701346 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.97s
2026-03-08 00:47:11.701353 | orchestrator | common : Creating log volume -------------------------------------------- 1.94s
2026-03-08 00:47:11.701359 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.89s
2026-03-08 00:47:11.701366 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.79s
2026-03-08 00:47:11.701377 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.53s
2026-03-08 00:47:11.701384 | orchestrator | 2026-03-08 00:47:11 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:14.719388 | orchestrator | 2026-03-08 00:47:14 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:14.720191 | orchestrator | 2026-03-08 00:47:14 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:14.720756 | orchestrator | 2026-03-08 00:47:14 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state STARTED
2026-03-08 00:47:14.721616 | orchestrator | 2026-03-08 00:47:14 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:14.722313 | orchestrator | 2026-03-08 00:47:14 | INFO  | Task 2fc282c2-b62e-4c44-be6b-96ed53a898d2 is in state STARTED
2026-03-08 00:47:14.723127 | orchestrator | 2026-03-08 00:47:14 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED
2026-03-08 00:47:14.723140 | orchestrator | 2026-03-08 00:47:14 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:17.758101 | orchestrator | 2026-03-08 00:47:17 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:17.758214 | orchestrator | 2026-03-08 00:47:17 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:17.758714 | orchestrator | 2026-03-08 00:47:17 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state STARTED
2026-03-08 00:47:17.759390 | orchestrator | 2026-03-08 00:47:17 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:17.759905 | orchestrator | 2026-03-08 00:47:17 | INFO  | Task 2fc282c2-b62e-4c44-be6b-96ed53a898d2 is in state STARTED
2026-03-08 00:47:17.760837 | orchestrator | 2026-03-08 00:47:17 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED
2026-03-08 00:47:17.760883 | orchestrator | 2026-03-08 00:47:17 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:20.805253 | orchestrator | 2026-03-08 00:47:20 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:20.808537 | orchestrator | 2026-03-08 00:47:20 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:20.808885 | orchestrator | 2026-03-08 00:47:20 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state STARTED
2026-03-08 00:47:20.809585 | orchestrator | 2026-03-08 00:47:20 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:20.810450 | orchestrator | 2026-03-08 00:47:20 | INFO  | Task 2fc282c2-b62e-4c44-be6b-96ed53a898d2 is in state STARTED
2026-03-08 00:47:20.812748 | orchestrator | 2026-03-08 00:47:20 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED
2026-03-08 00:47:20.812823 | orchestrator | 2026-03-08 00:47:20 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:23.841157 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:23.842523 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:23.843427 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state STARTED
2026-03-08 00:47:23.845274 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:23.845942 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 2fc282c2-b62e-4c44-be6b-96ed53a898d2 is in state STARTED
2026-03-08 00:47:23.847037 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED
2026-03-08 00:47:23.847146 | orchestrator | 2026-03-08 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:26.900766 | orchestrator | 2026-03-08 00:47:26 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:26.901654 | orchestrator | 2026-03-08 00:47:26 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:26.902755 | orchestrator | 2026-03-08 00:47:26 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state STARTED
2026-03-08 00:47:26.903740 | orchestrator | 2026-03-08 00:47:26 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:26.904509 | orchestrator | 2026-03-08 00:47:26 | INFO  | Task 2fc282c2-b62e-4c44-be6b-96ed53a898d2 is in state STARTED
2026-03-08 00:47:26.905183 | orchestrator | 2026-03-08 00:47:26 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED
2026-03-08 00:47:26.905532 | orchestrator | 2026-03-08 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:29.956145 | orchestrator | 2026-03-08 00:47:29 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:29.956226 | orchestrator | 2026-03-08 00:47:29 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:29.956250 | orchestrator | 2026-03-08 00:47:29 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state STARTED
2026-03-08 00:47:29.958836 | orchestrator | 2026-03-08 00:47:29 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED
2026-03-08 00:47:29.961321 | orchestrator | 2026-03-08 00:47:29 | INFO  | Task 2fc282c2-b62e-4c44-be6b-96ed53a898d2 is in state SUCCESS
2026-03-08 00:47:29.961844 | orchestrator | 2026-03-08 00:47:29 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED
2026-03-08 00:47:29.961877 | orchestrator | 2026-03-08 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:33.000730 | orchestrator | 2026-03-08 00:47:33 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:47:33.001131 | orchestrator | 2026-03-08 00:47:33 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:47:33.001845 | orchestrator | 2026-03-08 00:47:33 | INFO  | Task 78ec6d8d-9319-4082-bd6d-0db0a6c27236 is in state SUCCESS
2026-03-08 00:47:33.003240 | orchestrator |
2026-03-08 00:47:33.003311 | orchestrator |
2026-03-08 00:47:33.003327 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:47:33.003337 | orchestrator |
2026-03-08 00:47:33.003347 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:47:33.003357 | orchestrator | Sunday 08 March 2026 00:47:14 +0000 (0:00:00.246) 0:00:00.246 **********
2026-03-08 00:47:33.003366 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:47:33.003376 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:47:33.003385 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:47:33.003394 | orchestrator |
2026-03-08 00:47:33.003403 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:47:33.003412 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.313) 0:00:00.559 **********
2026-03-08 00:47:33.003421 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-08 00:47:33.003430 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-08 00:47:33.003439 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-08 00:47:33.003448 | orchestrator |
2026-03-08 00:47:33.003458 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-08 00:47:33.003467 | orchestrator |
2026-03-08 00:47:33.003476 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-08 00:47:33.003509 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.456) 0:00:01.016 **********
2026-03-08 00:47:33.003518 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:47:33.003528 | orchestrator |
2026-03-08 00:47:33.003537 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-08 00:47:33.003545 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:00.468) 0:00:01.484 **********
2026-03-08 00:47:33.003554 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-08 00:47:33.003563 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-08 00:47:33.003572 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-08 00:47:33.003580 | orchestrator |
2026-03-08 00:47:33.003589 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-08 00:47:33.003598 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:00.765) 0:00:02.249 **********
2026-03-08 00:47:33.003607 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-08 00:47:33.003616 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-08 00:47:33.003624 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-08 00:47:33.003633 | orchestrator |
2026-03-08 00:47:33.003642 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-08 00:47:33.003650 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:01.897) 0:00:04.147 **********
2026-03-08 00:47:33.003812 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:33.003834 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:33.003849 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:33.003862 | orchestrator |
2026-03-08 00:47:33.003876 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-08 00:47:33.003890 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:02.036) 0:00:06.184 **********
2026-03-08 00:47:33.003905 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:33.003920 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:33.003935 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:33.003949 | orchestrator |
2026-03-08 00:47:33.003958 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:47:33.003968 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:47:33.004006 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:47:33.004016 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:47:33.004024 | orchestrator |
2026-03-08 00:47:33.004033 | orchestrator |
2026-03-08 00:47:33.004042 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:47:33.004051 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:08.138) 0:00:14.323 **********
2026-03-08 00:47:33.004059 | orchestrator | ===============================================================================
2026-03-08 00:47:33.004068 | orchestrator | memcached : Restart memcached container --------------------------------- 8.14s
2026-03-08 00:47:33.004091 | orchestrator | memcached : Check memcached container ----------------------------------- 2.04s
2026-03-08 00:47:33.004100 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.90s
2026-03-08 00:47:33.004108 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.77s
2026-03-08 00:47:33.004117 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.47s
2026-03-08 00:47:33.004126 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2026-03-08 00:47:33.004134 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-08 00:47:33.004143 | orchestrator |
2026-03-08 00:47:33.004151 | orchestrator |
2026-03-08 00:47:33.004160 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:47:33.004180 | orchestrator |
2026-03-08 00:47:33.004189 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:47:33.004197 | orchestrator | Sunday 08 March 2026 00:47:14 +0000 (0:00:00.246) 0:00:00.246 **********
2026-03-08 00:47:33.004206 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:47:33.004215 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:47:33.004224 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:47:33.004233 | orchestrator |
2026-03-08 00:47:33.004242 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:47:33.004268 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.277) 0:00:00.523 **********
2026-03-08 00:47:33.004277 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-08 00:47:33.004285 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-08 00:47:33.004294 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-08 00:47:33.004302 | orchestrator |
2026-03-08 00:47:33.004311 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-08 00:47:33.004320 | orchestrator |
2026-03-08 00:47:33.004328 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-08 00:47:33.004337 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.474) 0:00:00.998 **********
2026-03-08 00:47:33.004345 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:47:33.004355 | orchestrator |
2026-03-08 00:47:33.004363 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-08 00:47:33.004372 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:00.637) 0:00:01.636 **********
2026-03-08 00:47:33.004383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004481 | orchestrator |
2026-03-08 00:47:33.004497 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-08 00:47:33.004514 | orchestrator | Sunday 08 March 2026 00:47:17 +0000 (0:00:01.182) 0:00:02.819 **********
2026-03-08 00:47:33.004531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004648 | orchestrator |
2026-03-08 00:47:33.004660 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-08 00:47:33.004671 | orchestrator | Sunday 08 March 2026 00:47:19 +0000 (0:00:02.529) 0:00:05.348 **********
2026-03-08 00:47:33.004681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004765 | orchestrator |
2026-03-08 00:47:33.004775 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-08 00:47:33.004785 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:02.834) 0:00:08.182 **********
2026-03-08 00:47:33.004796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:33.004817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-08 00:47:33.004826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:33.004844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:33.004860 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:33.004869 | orchestrator | 2026-03-08 00:47:33.004879 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-08 00:47:33.004888 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:01.900) 0:00:10.083 ********** 2026-03-08 00:47:33.004896 | orchestrator | 2026-03-08 00:47:33.004905 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-08 00:47:33.004914 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.063) 0:00:10.147 ********** 2026-03-08 00:47:33.004923 | orchestrator | 2026-03-08 00:47:33.004931 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-08 00:47:33.004941 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.073) 0:00:10.221 ********** 2026-03-08 00:47:33.004949 | orchestrator | 2026-03-08 00:47:33.004958 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-08 00:47:33.004967 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.079) 0:00:10.300 ********** 2026-03-08 00:47:33.004999 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:33.005016 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:33.005031 | orchestrator | changed: 
[testbed-node-2] 2026-03-08 00:47:33.005046 | orchestrator | 2026-03-08 00:47:33.005060 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-08 00:47:33.005075 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:03.253) 0:00:13.554 ********** 2026-03-08 00:47:33.005089 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:33.005099 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:33.005107 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:33.005116 | orchestrator | 2026-03-08 00:47:33.005125 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:47:33.005134 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:47:33.005149 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:47:33.005158 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:47:33.005167 | orchestrator | 2026-03-08 00:47:33.005176 | orchestrator | 2026-03-08 00:47:33.005185 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:47:33.005194 | orchestrator | Sunday 08 March 2026 00:47:31 +0000 (0:00:03.542) 0:00:17.096 ********** 2026-03-08 00:47:33.005202 | orchestrator | =============================================================================== 2026-03-08 00:47:33.005211 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.54s 2026-03-08 00:47:33.005224 | orchestrator | redis : Restart redis container ----------------------------------------- 3.25s 2026-03-08 00:47:33.005238 | orchestrator | redis : Copying over redis config files --------------------------------- 2.83s 2026-03-08 00:47:33.005251 | orchestrator | redis : Copying over default config.json files 
-------------------------- 2.53s 2026-03-08 00:47:33.005266 | orchestrator | redis : Check redis containers ------------------------------------------ 1.90s 2026-03-08 00:47:33.005280 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.18s 2026-03-08 00:47:33.005293 | orchestrator | redis : include_tasks --------------------------------------------------- 0.64s 2026-03-08 00:47:33.005308 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-03-08 00:47:33.005323 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-03-08 00:47:33.005337 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-03-08 00:47:33.005350 | orchestrator | 2026-03-08 00:47:33 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:33.005363 | orchestrator | 2026-03-08 00:47:33 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:33.006529 | orchestrator | 2026-03-08 00:47:33 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:33.006576 | orchestrator | 2026-03-08 00:47:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:36.071757 | orchestrator | 2026-03-08 00:47:36 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:36.073038 | orchestrator | 2026-03-08 00:47:36 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:36.073459 | orchestrator | 2026-03-08 00:47:36 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:36.073949 | orchestrator | 2026-03-08 00:47:36 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:36.074918 | orchestrator | 2026-03-08 00:47:36 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 
00:47:36.075269 | orchestrator | 2026-03-08 00:47:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:39.377812 | orchestrator | 2026-03-08 00:47:39 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:39.377891 | orchestrator | 2026-03-08 00:47:39 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:39.377898 | orchestrator | 2026-03-08 00:47:39 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:39.377904 | orchestrator | 2026-03-08 00:47:39 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:39.377910 | orchestrator | 2026-03-08 00:47:39 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:39.377916 | orchestrator | 2026-03-08 00:47:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:42.400283 | orchestrator | 2026-03-08 00:47:42 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:42.403525 | orchestrator | 2026-03-08 00:47:42 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:42.408593 | orchestrator | 2026-03-08 00:47:42 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:42.410173 | orchestrator | 2026-03-08 00:47:42 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:42.415950 | orchestrator | 2026-03-08 00:47:42 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:42.416077 | orchestrator | 2026-03-08 00:47:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:45.454450 | orchestrator | 2026-03-08 00:47:45 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:45.454507 | orchestrator | 2026-03-08 00:47:45 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:45.454513 | orchestrator 
| 2026-03-08 00:47:45 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:45.454518 | orchestrator | 2026-03-08 00:47:45 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:45.454522 | orchestrator | 2026-03-08 00:47:45 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:45.454526 | orchestrator | 2026-03-08 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:48.496683 | orchestrator | 2026-03-08 00:47:48 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:48.496938 | orchestrator | 2026-03-08 00:47:48 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:48.498936 | orchestrator | 2026-03-08 00:47:48 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:48.499657 | orchestrator | 2026-03-08 00:47:48 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:48.500287 | orchestrator | 2026-03-08 00:47:48 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:48.500308 | orchestrator | 2026-03-08 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:51.549401 | orchestrator | 2026-03-08 00:47:51 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:51.549838 | orchestrator | 2026-03-08 00:47:51 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:51.550278 | orchestrator | 2026-03-08 00:47:51 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:51.550992 | orchestrator | 2026-03-08 00:47:51 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:51.551544 | orchestrator | 2026-03-08 00:47:51 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:51.551596 | orchestrator | 
2026-03-08 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:54.583826 | orchestrator | 2026-03-08 00:47:54 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:54.587517 | orchestrator | 2026-03-08 00:47:54 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:54.588008 | orchestrator | 2026-03-08 00:47:54 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:54.588799 | orchestrator | 2026-03-08 00:47:54 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:54.589510 | orchestrator | 2026-03-08 00:47:54 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:54.589586 | orchestrator | 2026-03-08 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:57.622634 | orchestrator | 2026-03-08 00:47:57 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:47:57.623633 | orchestrator | 2026-03-08 00:47:57 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:47:57.625192 | orchestrator | 2026-03-08 00:47:57 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:47:57.627524 | orchestrator | 2026-03-08 00:47:57 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:47:57.628460 | orchestrator | 2026-03-08 00:47:57 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:47:57.628616 | orchestrator | 2026-03-08 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:00.667260 | orchestrator | 2026-03-08 00:48:00 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:00.672067 | orchestrator | 2026-03-08 00:48:00 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:00.672802 | orchestrator | 2026-03-08 00:48:00 | INFO  | 
Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:00.673885 | orchestrator | 2026-03-08 00:48:00 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:00.674798 | orchestrator | 2026-03-08 00:48:00 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:00.674863 | orchestrator | 2026-03-08 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:03.727064 | orchestrator | 2026-03-08 00:48:03 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:03.727726 | orchestrator | 2026-03-08 00:48:03 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:03.728816 | orchestrator | 2026-03-08 00:48:03 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:03.729725 | orchestrator | 2026-03-08 00:48:03 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:03.732096 | orchestrator | 2026-03-08 00:48:03 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:03.732149 | orchestrator | 2026-03-08 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:06.786784 | orchestrator | 2026-03-08 00:48:06 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:06.788920 | orchestrator | 2026-03-08 00:48:06 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:06.791386 | orchestrator | 2026-03-08 00:48:06 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:06.793683 | orchestrator | 2026-03-08 00:48:06 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:06.796123 | orchestrator | 2026-03-08 00:48:06 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:06.796372 | orchestrator | 2026-03-08 00:48:06 | INFO  | Wait 1 
second(s) until the next check 2026-03-08 00:48:09.838693 | orchestrator | 2026-03-08 00:48:09 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:09.838755 | orchestrator | 2026-03-08 00:48:09 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:09.838837 | orchestrator | 2026-03-08 00:48:09 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:09.841829 | orchestrator | 2026-03-08 00:48:09 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:09.841918 | orchestrator | 2026-03-08 00:48:09 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:09.842458 | orchestrator | 2026-03-08 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:12.896330 | orchestrator | 2026-03-08 00:48:12 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:12.901479 | orchestrator | 2026-03-08 00:48:12 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:12.903268 | orchestrator | 2026-03-08 00:48:12 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:12.906544 | orchestrator | 2026-03-08 00:48:12 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:12.909245 | orchestrator | 2026-03-08 00:48:12 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:12.909323 | orchestrator | 2026-03-08 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:15.975773 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:15.976584 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:15.978235 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 
4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:15.979134 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:15.979878 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:15.979912 | orchestrator | 2026-03-08 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:19.033566 | orchestrator | 2026-03-08 00:48:19 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:19.044597 | orchestrator | 2026-03-08 00:48:19 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:19.045665 | orchestrator | 2026-03-08 00:48:19 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:19.046579 | orchestrator | 2026-03-08 00:48:19 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:19.051553 | orchestrator | 2026-03-08 00:48:19 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:19.051590 | orchestrator | 2026-03-08 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:22.095678 | orchestrator | 2026-03-08 00:48:22 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:22.096444 | orchestrator | 2026-03-08 00:48:22 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:22.099125 | orchestrator | 2026-03-08 00:48:22 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:22.100420 | orchestrator | 2026-03-08 00:48:22 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:22.101399 | orchestrator | 2026-03-08 00:48:22 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state STARTED 2026-03-08 00:48:22.101997 | orchestrator | 2026-03-08 00:48:22 | INFO  | Wait 1 
second(s) until the next check 2026-03-08 00:48:25.142801 | orchestrator | 2026-03-08 00:48:25 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:25.143686 | orchestrator | 2026-03-08 00:48:25 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:25.144726 | orchestrator | 2026-03-08 00:48:25 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:25.145918 | orchestrator | 2026-03-08 00:48:25 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:25.148019 | orchestrator | 2026-03-08 00:48:25 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:25.151000 | orchestrator | 2026-03-08 00:48:25 | INFO  | Task 05cbe363-8afa-4388-a454-af259605f4fa is in state SUCCESS 2026-03-08 00:48:25.152094 | orchestrator | 2026-03-08 00:48:25.152166 | orchestrator | 2026-03-08 00:48:25.152177 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:48:25.152186 | orchestrator | 2026-03-08 00:48:25.152193 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:48:25.152201 | orchestrator | Sunday 08 March 2026 00:47:14 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-03-08 00:48:25.152208 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:48:25.152216 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:48:25.152224 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:48:25.152231 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:48:25.152238 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:48:25.152255 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:48:25.152262 | orchestrator | 2026-03-08 00:48:25.152269 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:48:25.152276 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.845) 
0:00:01.104 ********** 2026-03-08 00:48:25.152283 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:25.152290 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:25.152297 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:25.152304 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:25.152311 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:25.152318 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:25.152325 | orchestrator | 2026-03-08 00:48:25.152332 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-08 00:48:25.152339 | orchestrator | 2026-03-08 00:48:25.152346 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-08 00:48:25.152353 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:01.019) 0:00:02.124 ********** 2026-03-08 00:48:25.152360 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:48:25.152368 | orchestrator | 2026-03-08 00:48:25.152375 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-08 00:48:25.152406 | orchestrator | Sunday 08 March 2026 00:47:17 +0000 (0:00:01.229) 0:00:03.353 ********** 2026-03-08 00:48:25.152413 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-08 00:48:25.152420 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-08 00:48:25.152428 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-08 00:48:25.152443 | 
orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-08 00:48:25.152450 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-08 00:48:25.152456 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-08 00:48:25.152462 | orchestrator | 2026-03-08 00:48:25.152469 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-08 00:48:25.152488 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:01.165) 0:00:04.519 ********** 2026-03-08 00:48:25.152495 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-08 00:48:25.152501 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-08 00:48:25.152507 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-08 00:48:25.152514 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-08 00:48:25.152520 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-08 00:48:25.152527 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-08 00:48:25.152533 | orchestrator | 2026-03-08 00:48:25.152539 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-08 00:48:25.152546 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:01.657) 0:00:06.177 ********** 2026-03-08 00:48:25.152552 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-08 00:48:25.152558 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:48:25.152565 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-08 00:48:25.152571 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:48:25.152577 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-08 00:48:25.152584 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:48:25.152590 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-08 00:48:25.152597 | 
orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:25.152603 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-08 00:48:25.152609 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:25.152616 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-08 00:48:25.152622 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:25.152628 | orchestrator | 2026-03-08 00:48:25.152635 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-08 00:48:25.152641 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:01.788) 0:00:07.965 ********** 2026-03-08 00:48:25.152647 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:48:25.152653 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:48:25.152659 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:48:25.152675 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:25.152681 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:25.152688 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:25.152694 | orchestrator | 2026-03-08 00:48:25.152701 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-08 00:48:25.152707 | orchestrator | Sunday 08 March 2026 00:47:23 +0000 (0:00:00.880) 0:00:08.846 ********** 2026-03-08 00:48:25.152732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152791 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152841 | orchestrator | 2026-03-08 00:48:25.152848 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-08 00:48:25.152862 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:01.754) 0:00:10.600 ********** 2026-03-08 00:48:25.152870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.152992 | orchestrator | 2026-03-08 00:48:25.152999 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-08 
00:48:25.153006 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:04.251) 0:00:14.852 ********** 2026-03-08 00:48:25.153013 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:48:25.153019 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:48:25.153026 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:48:25.153033 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:25.153039 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:25.153045 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:25.153051 | orchestrator | 2026-03-08 00:48:25.153058 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-08 00:48:25.153064 | orchestrator | Sunday 08 March 2026 00:47:30 +0000 (0:00:01.415) 0:00:16.267 ********** 2026-03-08 00:48:25.153071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153108 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:25.153177 | orchestrator | 2026-03-08 00:48:25.153184 | orchestrator | TASK [openvswitch 
: Flush Handlers] ******************************************** 2026-03-08 00:48:25.153189 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:03.012) 0:00:19.280 ********** 2026-03-08 00:48:25.153195 | orchestrator | 2026-03-08 00:48:25.153201 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:25.153207 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:00.206) 0:00:19.486 ********** 2026-03-08 00:48:25.153213 | orchestrator | 2026-03-08 00:48:25.153219 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:25.153224 | orchestrator | Sunday 08 March 2026 00:47:34 +0000 (0:00:00.293) 0:00:19.779 ********** 2026-03-08 00:48:25.153230 | orchestrator | 2026-03-08 00:48:25.153236 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:25.153243 | orchestrator | Sunday 08 March 2026 00:47:34 +0000 (0:00:00.474) 0:00:20.254 ********** 2026-03-08 00:48:25.153249 | orchestrator | 2026-03-08 00:48:25.153256 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:25.153263 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.502) 0:00:20.756 ********** 2026-03-08 00:48:25.153269 | orchestrator | 2026-03-08 00:48:25.153276 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:25.153283 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.249) 0:00:21.006 ********** 2026-03-08 00:48:25.153289 | orchestrator | 2026-03-08 00:48:25.153295 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-08 00:48:25.153301 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.307) 0:00:21.313 ********** 2026-03-08 00:48:25.153305 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:48:25.153309 | 
orchestrator | changed: [testbed-node-5] 2026-03-08 00:48:25.153313 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:48:25.153322 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:48:25.153328 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:48:25.153334 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:48:25.153341 | orchestrator | 2026-03-08 00:48:25.153348 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-08 00:48:25.153354 | orchestrator | Sunday 08 March 2026 00:47:48 +0000 (0:00:12.837) 0:00:34.151 ********** 2026-03-08 00:48:25.153360 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:48:25.153367 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:48:25.153373 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:48:25.153380 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:48:25.153386 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:48:25.153393 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:48:25.153398 | orchestrator | 2026-03-08 00:48:25.153402 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-08 00:48:25.153406 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:01.550) 0:00:35.701 ********** 2026-03-08 00:48:25.153410 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:48:25.153414 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:48:25.153417 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:48:25.153421 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:48:25.153425 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:48:25.153428 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:48:25.153432 | orchestrator | 2026-03-08 00:48:25.153436 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-08 00:48:25.153440 | orchestrator | Sunday 08 March 2026 00:47:55 +0000 (0:00:05.207) 0:00:40.909 ********** 
2026-03-08 00:48:25.153447 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-08 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:25.153548 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-08 00:48:25.153554 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-08 00:48:25.153565 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-08 00:48:25.153571 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-08 00:48:25.153576 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-08 00:48:25.153582 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-08 00:48:25.153588 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-08 00:48:25.153593 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-08 00:48:25.153599 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-08 00:48:25.153606 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-08 00:48:25.153612 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-08 00:48:25.153630 | orchestrator | ok: [testbed-node-0] => 
(item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:25.153637 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:25.153643 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:25.153649 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:25.153661 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:25.153667 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:25.153673 | orchestrator | 2026-03-08 00:48:25.153677 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-08 00:48:25.153681 | orchestrator | Sunday 08 March 2026 00:48:03 +0000 (0:00:08.397) 0:00:49.306 ********** 2026-03-08 00:48:25.153685 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-08 00:48:25.153688 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:25.153692 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-08 00:48:25.153696 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:25.153700 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-08 00:48:25.153704 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:25.153710 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-08 00:48:25.153716 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-08 00:48:25.153721 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-08 00:48:25.153729 | orchestrator | 2026-03-08 00:48:25.153738 | orchestrator | TASK [openvswitch : Ensuring 
OVS ports are properly setup] ********************* 2026-03-08 00:48:25.153743 | orchestrator | Sunday 08 March 2026 00:48:06 +0000 (0:00:03.252) 0:00:52.558 ********** 2026-03-08 00:48:25.153749 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-08 00:48:25.153755 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:25.153761 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-08 00:48:25.153767 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:25.153774 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-08 00:48:25.153780 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:25.153786 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-08 00:48:25.153793 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-08 00:48:25.153799 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-08 00:48:25.153805 | orchestrator | 2026-03-08 00:48:25.153812 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-08 00:48:25.153818 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:05.024) 0:00:57.583 ********** 2026-03-08 00:48:25.153824 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:48:25.153830 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:48:25.153837 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:48:25.153851 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:48:25.153858 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:48:25.153865 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:48:25.153871 | orchestrator | 2026-03-08 00:48:25.153878 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:48:25.153885 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
2026-03-08 00:48:25.153958 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:48:25.153969 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:48:25.153981 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:48:25.153988 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:48:25.154001 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:48:25.154007 | orchestrator | 2026-03-08 00:48:25.154048 | orchestrator | 2026-03-08 00:48:25.154056 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:48:25.154063 | orchestrator | Sunday 08 March 2026 00:48:23 +0000 (0:00:11.223) 0:01:08.806 ********** 2026-03-08 00:48:25.154070 | orchestrator | =============================================================================== 2026-03-08 00:48:25.154076 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.43s 2026-03-08 00:48:25.154082 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.84s 2026-03-08 00:48:25.154089 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.40s 2026-03-08 00:48:25.154095 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.02s 2026-03-08 00:48:25.154101 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.25s 2026-03-08 00:48:25.154107 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.25s 2026-03-08 00:48:25.154114 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.01s 2026-03-08 
00:48:25.154120 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.03s 2026-03-08 00:48:25.154127 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.79s 2026-03-08 00:48:25.154133 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.75s 2026-03-08 00:48:25.154139 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.66s 2026-03-08 00:48:25.154145 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.55s 2026-03-08 00:48:25.154152 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.42s 2026-03-08 00:48:25.154158 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.23s 2026-03-08 00:48:25.154164 | orchestrator | module-load : Load modules ---------------------------------------------- 1.17s 2026-03-08 00:48:25.154170 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2026-03-08 00:48:25.154177 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.88s 2026-03-08 00:48:25.154183 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s 2026-03-08 00:48:28.196458 | orchestrator | 2026-03-08 00:48:28 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:28.197340 | orchestrator | 2026-03-08 00:48:28 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:28.197981 | orchestrator | 2026-03-08 00:48:28 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:28.198911 | orchestrator | 2026-03-08 00:48:28 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:28.204780 | orchestrator | 2026-03-08 00:48:28 | INFO  | Task 
26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:28.204856 | orchestrator | 2026-03-08 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:31.238161 | orchestrator | 2026-03-08 00:48:31 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:31.238617 | orchestrator | 2026-03-08 00:48:31 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:31.239568 | orchestrator | 2026-03-08 00:48:31 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:31.240990 | orchestrator | 2026-03-08 00:48:31 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:31.242844 | orchestrator | 2026-03-08 00:48:31 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:31.242929 | orchestrator | 2026-03-08 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:34.287208 | orchestrator | 2026-03-08 00:48:34 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:34.287274 | orchestrator | 2026-03-08 00:48:34 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:34.287283 | orchestrator | 2026-03-08 00:48:34 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:34.287289 | orchestrator | 2026-03-08 00:48:34 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:34.288057 | orchestrator | 2026-03-08 00:48:34 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:34.288313 | orchestrator | 2026-03-08 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:37.331822 | orchestrator | 2026-03-08 00:48:37 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:37.334133 | orchestrator | 2026-03-08 00:48:37 | INFO  | Task 
a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:37.335489 | orchestrator | 2026-03-08 00:48:37 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:37.338045 | orchestrator | 2026-03-08 00:48:37 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:37.339647 | orchestrator | 2026-03-08 00:48:37 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:37.339866 | orchestrator | 2026-03-08 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:40.390917 | orchestrator | 2026-03-08 00:48:40 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:40.390968 | orchestrator | 2026-03-08 00:48:40 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:40.390973 | orchestrator | 2026-03-08 00:48:40 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:40.391723 | orchestrator | 2026-03-08 00:48:40 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:40.393328 | orchestrator | 2026-03-08 00:48:40 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:40.394042 | orchestrator | 2026-03-08 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:43.422250 | orchestrator | 2026-03-08 00:48:43 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:43.422601 | orchestrator | 2026-03-08 00:48:43 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:43.424044 | orchestrator | 2026-03-08 00:48:43 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:43.425095 | orchestrator | 2026-03-08 00:48:43 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:43.425904 | orchestrator | 2026-03-08 00:48:43 | INFO  | Task 
26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:43.425933 | orchestrator | 2026-03-08 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:46.468560 | orchestrator | 2026-03-08 00:48:46 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:46.470540 | orchestrator | 2026-03-08 00:48:46 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:46.472786 | orchestrator | 2026-03-08 00:48:46 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:46.474714 | orchestrator | 2026-03-08 00:48:46 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:46.476463 | orchestrator | 2026-03-08 00:48:46 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:46.476504 | orchestrator | 2026-03-08 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:49.555676 | orchestrator | 2026-03-08 00:48:49 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:49.557440 | orchestrator | 2026-03-08 00:48:49 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:49.558932 | orchestrator | 2026-03-08 00:48:49 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:49.560818 | orchestrator | 2026-03-08 00:48:49 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:49.563020 | orchestrator | 2026-03-08 00:48:49 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:49.563993 | orchestrator | 2026-03-08 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:52.615471 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:52.618184 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 
a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:52.619994 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:52.623057 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:52.625863 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:52.627030 | orchestrator | 2026-03-08 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:55.664215 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:55.664801 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:55.667395 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:55.668241 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:55.669155 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:55.669244 | orchestrator | 2026-03-08 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:58.732296 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:48:58.732385 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:48:58.732400 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:48:58.732434 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:48:58.732441 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 
26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:48:58.732449 | orchestrator | 2026-03-08 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:01.766671 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:01.769312 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:01.769752 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:01.770608 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:49:01.771406 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:49:01.771453 | orchestrator | 2026-03-08 00:49:01 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:04.926576 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:04.927131 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:04.927688 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:04.928302 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:49:04.929066 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:49:04.930288 | orchestrator | 2026-03-08 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:08.256796 | orchestrator | 2026-03-08 00:49:08 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:08.256911 | orchestrator | 2026-03-08 00:49:08 | INFO  | Task 
a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:08.256922 | orchestrator | 2026-03-08 00:49:08 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:08.256929 | orchestrator | 2026-03-08 00:49:08 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:49:08.256935 | orchestrator | 2026-03-08 00:49:08 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:49:08.256942 | orchestrator | 2026-03-08 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:11.293533 | orchestrator | 2026-03-08 00:49:11 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:11.294139 | orchestrator | 2026-03-08 00:49:11 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:11.294805 | orchestrator | 2026-03-08 00:49:11 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:11.296889 | orchestrator | 2026-03-08 00:49:11 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:49:11.299396 | orchestrator | 2026-03-08 00:49:11 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:49:11.299471 | orchestrator | 2026-03-08 00:49:11 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:14.369066 | orchestrator | 2026-03-08 00:49:14 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:14.369987 | orchestrator | 2026-03-08 00:49:14 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:14.371115 | orchestrator | 2026-03-08 00:49:14 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:14.372362 | orchestrator | 2026-03-08 00:49:14 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:49:14.373247 | orchestrator | 2026-03-08 00:49:14 | INFO  | Task 
26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:49:14.373314 | orchestrator | 2026-03-08 00:49:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:17.413760 | orchestrator | 2026-03-08 00:49:17 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:17.416721 | orchestrator | 2026-03-08 00:49:17 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:17.418600 | orchestrator | 2026-03-08 00:49:17 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:17.420103 | orchestrator | 2026-03-08 00:49:17 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state STARTED 2026-03-08 00:49:17.421648 | orchestrator | 2026-03-08 00:49:17 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED 2026-03-08 00:49:17.423507 | orchestrator | 2026-03-08 00:49:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:20.466774 | orchestrator | 2026-03-08 00:49:20 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED 2026-03-08 00:49:20.468520 | orchestrator | 2026-03-08 00:49:20 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:49:20.468568 | orchestrator | 2026-03-08 00:49:20 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:49:20.470526 | orchestrator | 2026-03-08 00:49:20 | INFO  | Task 4ef5518c-ad5c-41f9-9f34-8df365d3f676 is in state SUCCESS 2026-03-08 00:49:20.471933 | orchestrator | 2026-03-08 00:49:20.471974 | orchestrator | 2026-03-08 00:49:20.471982 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-08 00:49:20.471991 | orchestrator | 2026-03-08 00:49:20.471998 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-08 00:49:20.472005 | orchestrator | Sunday 08 March 2026 00:44:46 +0000 (0:00:00.152) 0:00:00.152 
********** 2026-03-08 00:49:20.472012 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:49:20.472020 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:49:20.472026 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:49:20.472033 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:20.472040 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:20.472046 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:20.472052 | orchestrator | 2026-03-08 00:49:20.472059 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-08 00:49:20.472066 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:00.646) 0:00:00.799 ********** 2026-03-08 00:49:20.472074 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472082 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472089 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472094 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472098 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472102 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472106 | orchestrator | 2026-03-08 00:49:20.472110 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-08 00:49:20.472114 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:00.559) 0:00:01.359 ********** 2026-03-08 00:49:20.472118 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472122 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472126 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472130 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472134 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472138 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472142 | orchestrator | 2026-03-08 00:49:20.472146 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 
2026-03-08 00:49:20.472150 | orchestrator | Sunday 08 March 2026 00:44:48 +0000 (0:00:00.608) 0:00:01.967 ********** 2026-03-08 00:49:20.472154 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:20.472158 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:20.472161 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:20.472185 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:20.472189 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:20.472192 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:20.472196 | orchestrator | 2026-03-08 00:49:20.472200 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-08 00:49:20.472204 | orchestrator | Sunday 08 March 2026 00:44:50 +0000 (0:00:02.085) 0:00:04.052 ********** 2026-03-08 00:49:20.472208 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:20.472211 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:20.472215 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:20.472219 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:20.472223 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:20.472237 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:20.472241 | orchestrator | 2026-03-08 00:49:20.472244 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-08 00:49:20.472248 | orchestrator | Sunday 08 March 2026 00:44:51 +0000 (0:00:01.356) 0:00:05.409 ********** 2026-03-08 00:49:20.472252 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:20.472274 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:20.472278 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:20.472282 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:20.472285 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:20.472289 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:20.472293 | orchestrator | 
2026-03-08 00:49:20.472297 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-08 00:49:20.472301 | orchestrator | Sunday 08 March 2026 00:44:52 +0000 (0:00:01.020) 0:00:06.430 ********** 2026-03-08 00:49:20.472304 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472308 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472312 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472316 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472320 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472323 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472327 | orchestrator | 2026-03-08 00:49:20.472331 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-08 00:49:20.472335 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.655) 0:00:07.085 ********** 2026-03-08 00:49:20.472338 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472342 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472346 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472350 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472353 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472357 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472361 | orchestrator | 2026-03-08 00:49:20.472364 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-08 00:49:20.472368 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.544) 0:00:07.629 ********** 2026-03-08 00:49:20.472373 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:20.472379 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:20.472385 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472391 
| orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:20.472397 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:20.472403 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472409 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:20.472414 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:20.472419 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472425 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:20.472444 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:20.472456 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472463 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:20.472469 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:20.472476 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472482 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:20.472488 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:20.472494 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472500 | orchestrator | 2026-03-08 00:49:20.472506 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-08 00:49:20.472513 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.606) 0:00:08.236 ********** 2026-03-08 00:49:20.472519 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472525 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472532 | orchestrator | skipping: 
[testbed-node-5] 2026-03-08 00:49:20.472538 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472545 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472550 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472556 | orchestrator | 2026-03-08 00:49:20.472562 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-08 00:49:20.472569 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:01.332) 0:00:09.569 ********** 2026-03-08 00:49:20.472575 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:49:20.472581 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:49:20.472588 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:49:20.472594 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:20.472600 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:20.472607 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:20.472613 | orchestrator | 2026-03-08 00:49:20.472619 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-08 00:49:20.472626 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:01.043) 0:00:10.613 ********** 2026-03-08 00:49:20.472632 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:20.472638 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:20.472645 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:20.472651 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:20.472657 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:20.472663 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:20.472669 | orchestrator | 2026-03-08 00:49:20.472675 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-08 00:49:20.472681 | orchestrator | Sunday 08 March 2026 00:45:02 +0000 (0:00:05.792) 0:00:16.405 ********** 2026-03-08 00:49:20.472687 | orchestrator | skipping: [testbed-node-3] 
2026-03-08 00:49:20.472693 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472699 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472704 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472710 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472716 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472722 | orchestrator | 2026-03-08 00:49:20.472728 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-08 00:49:20.472734 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:02.191) 0:00:18.597 ********** 2026-03-08 00:49:20.472740 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472746 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472752 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472758 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472764 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472770 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472776 | orchestrator | 2026-03-08 00:49:20.472783 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-08 00:49:20.472874 | orchestrator | Sunday 08 March 2026 00:45:08 +0000 (0:00:03.265) 0:00:21.863 ********** 2026-03-08 00:49:20.472886 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472893 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472899 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472906 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.472912 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.472918 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.472925 | orchestrator | 2026-03-08 00:49:20.472932 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 
2026-03-08 00:49:20.472938 | orchestrator | Sunday 08 March 2026 00:45:09 +0000 (0:00:01.622) 0:00:23.485 ********** 2026-03-08 00:49:20.472944 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-08 00:49:20.472952 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-08 00:49:20.472958 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:20.472962 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-08 00:49:20.472966 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-08 00:49:20.472970 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:20.472973 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-08 00:49:20.472977 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-08 00:49:20.472981 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-08 00:49:20.472985 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-08 00:49:20.472989 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:20.472993 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-08 00:49:20.472997 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-08 00:49:20.473001 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.473004 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:20.473008 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-08 00:49:20.473012 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-08 00:49:20.473016 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:20.473019 | orchestrator | 2026-03-08 00:49:20.473023 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-08 00:49:20.473036 | orchestrator | Sunday 08 March 2026 00:45:12 +0000 (0:00:02.220) 0:00:25.706 ********** 2026-03-08 00:49:20.473040 | orchestrator | skipping: 
[testbed-node-3]
2026-03-08 00:49:20.473043 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.473047 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.473051 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.473055 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473059 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473062 | orchestrator |
2026-03-08 00:49:20.473066 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-08 00:49:20.473070 | orchestrator | Sunday 08 March 2026 00:45:13 +0000 (0:00:01.398) 0:00:27.104 **********
2026-03-08 00:49:20.473074 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:20.473078 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.473082 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.473085 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.473089 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473093 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473097 | orchestrator |
2026-03-08 00:49:20.473100 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-08 00:49:20.473104 | orchestrator |
2026-03-08 00:49:20.473108 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-08 00:49:20.473112 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:01.526) 0:00:28.630 **********
2026-03-08 00:49:20.473116 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.473126 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.473130 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.473134 | orchestrator |
2026-03-08 00:49:20.473138 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-08 00:49:20.473141 | orchestrator | Sunday 08 March 2026 00:45:16 +0000 (0:00:01.484) 0:00:30.115 **********
2026-03-08 00:49:20.473145 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.473149 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.473153 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.473157 | orchestrator |
2026-03-08 00:49:20.473160 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-08 00:49:20.473592 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:01.243) 0:00:31.359 **********
2026-03-08 00:49:20.473607 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.473611 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.473614 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.473618 | orchestrator |
2026-03-08 00:49:20.473622 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-08 00:49:20.473626 | orchestrator | Sunday 08 March 2026 00:45:18 +0000 (0:00:01.088) 0:00:32.447 **********
2026-03-08 00:49:20.473629 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.473633 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.473637 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.473640 | orchestrator |
2026-03-08 00:49:20.473644 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-08 00:49:20.473648 | orchestrator | Sunday 08 March 2026 00:45:19 +0000 (0:00:00.803) 0:00:33.251 **********
2026-03-08 00:49:20.473652 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.473656 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473659 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473663 | orchestrator |
2026-03-08 00:49:20.473667 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-08 00:49:20.473671 | orchestrator | Sunday 08 March 2026 00:45:20 +0000 (0:00:00.703) 0:00:33.954 **********
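The k3s_server tasks above (Stop k3s-init, Stop k3s, Clean previous runs of k3s-init, Create /etc/rancher/k3s directory) amount to roughly the following shell steps. This is a sketch, not the role's actual implementation; the `ROOT_PREFIX` variable is introduced here only so the sketch can run without touching a real system (on a node it would be empty).

```shell
# Rough shell equivalent of the k3s_server preparation tasks above.
# Assumption: the role stops the services and recreates the config
# directory; ROOT_PREFIX is illustrative, empty on a real node.
ROOT_PREFIX=${ROOT_PREFIX:-$(mktemp -d)}

systemctl stop k3s-init 2>/dev/null || true          # "Stop k3s-init"
systemctl stop k3s 2>/dev/null || true               # "Stop k3s"
systemctl reset-failed k3s-init 2>/dev/null || true  # "Clean previous runs of k3s-init"

mkdir -p "$ROOT_PREFIX/etc/rancher/k3s"              # "Create /etc/rancher/k3s directory"
echo "prepared $ROOT_PREFIX/etc/rancher/k3s"
```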
2026-03-08 00:49:20.473675 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.473679 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.473683 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.473686 | orchestrator |
2026-03-08 00:49:20.473690 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-08 00:49:20.473694 | orchestrator | Sunday 08 March 2026 00:45:21 +0000 (0:00:00.996) 0:00:34.951 **********
2026-03-08 00:49:20.473698 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.473702 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.473705 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.473709 | orchestrator |
2026-03-08 00:49:20.473713 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-08 00:49:20.473717 | orchestrator | Sunday 08 March 2026 00:45:22 +0000 (0:00:01.672) 0:00:36.623 **********
2026-03-08 00:49:20.473721 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:49:20.473725 | orchestrator |
2026-03-08 00:49:20.473729 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-08 00:49:20.473733 | orchestrator | Sunday 08 March 2026 00:45:23 +0000 (0:00:00.623) 0:00:37.247 **********
2026-03-08 00:49:20.473736 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.473740 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.473744 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.473747 | orchestrator |
2026-03-08 00:49:20.473751 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-08 00:49:20.473755 | orchestrator | Sunday 08 March 2026 00:45:26 +0000 (0:00:03.291) 0:00:40.539 **********
2026-03-08 00:49:20.473759 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473762 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.473766 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473770 | orchestrator |
2026-03-08 00:49:20.473774 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-08 00:49:20.473784 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:00.846) 0:00:41.386 **********
2026-03-08 00:49:20.473788 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473792 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473795 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.473815 | orchestrator |
2026-03-08 00:49:20.473820 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-08 00:49:20.473824 | orchestrator | Sunday 08 March 2026 00:45:28 +0000 (0:00:01.175) 0:00:42.561 **********
2026-03-08 00:49:20.473828 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473831 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473836 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.473842 | orchestrator |
2026-03-08 00:49:20.473848 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-08 00:49:20.473861 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:01.722) 0:00:44.284 **********
2026-03-08 00:49:20.473868 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473874 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.473880 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473885 | orchestrator |
2026-03-08 00:49:20.473891 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-08 00:49:20.473897 | orchestrator | Sunday 08 March 2026 00:45:31 +0000 (0:00:01.250) 0:00:45.535 **********
2026-03-08 00:49:20.473903 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.473909 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.473914 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.473920 | orchestrator |
2026-03-08 00:49:20.473925 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-08 00:49:20.473931 | orchestrator | Sunday 08 March 2026 00:45:32 +0000 (0:00:00.722) 0:00:46.258 **********
2026-03-08 00:49:20.473937 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.473942 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.473949 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.473954 | orchestrator |
2026-03-08 00:49:20.473960 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-08 00:49:20.473966 | orchestrator | Sunday 08 March 2026 00:45:34 +0000 (0:00:02.265) 0:00:48.524 **********
2026-03-08 00:49:20.473973 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.473979 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.473985 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.473990 | orchestrator |
2026-03-08 00:49:20.473996 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-08 00:49:20.474001 | orchestrator | Sunday 08 March 2026 00:45:37 +0000 (0:00:03.043) 0:00:51.568 **********
2026-03-08 00:49:20.474007 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474071 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474078 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474082 | orchestrator |
2026-03-08 00:49:20.474091 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-08 00:49:20.474095 | orchestrator | Sunday 08 March 2026 00:45:38 +0000 (0:00:00.971) 0:00:52.540 **********
2026-03-08 00:49:20.474099 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-08 00:49:20.474105 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-08 00:49:20.474109 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-08 00:49:20.474113 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-08 00:49:20.474117 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-08 00:49:20.474127 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-08 00:49:20.474131 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-08 00:49:20.474134 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-08 00:49:20.474138 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-08 00:49:20.474142 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-08 00:49:20.474146 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
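The FAILED - RETRYING lines above come from Ansible's `retries`/`until` mechanism on the verify task: the check simply fails until all three masters report Ready, which takes a few attempts while the cluster converges. Expressed directly in shell, the same wait-for-join check looks roughly like this (a sketch, not the role's implementation; `k3s kubectl` is the kubectl bundled with k3s, and the node count and retry budget match the log):

```shell
# Sketch of a "verify that all nodes actually joined" retry loop.
# Polls the API server until the expected number of nodes is Ready.
wait_for_nodes() {
  expected=$1; retries=$2; delay=$3
  i=0
  while [ "$i" -lt "$retries" ]; do
    # Count nodes reporting Ready; grep -c prints 0 when none match.
    ready=$(k3s kubectl get nodes --no-headers 2>/dev/null | grep -c ' Ready ' || true)
    if [ "$ready" -ge "$expected" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Matching the run above: 3 masters, 20 retries.
# wait_for_nodes 3 20 10
```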
2026-03-08 00:49:20.474150 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-08 00:49:20.474153 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474157 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474161 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474164 | orchestrator |
2026-03-08 00:49:20.474168 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-08 00:49:20.474172 | orchestrator | Sunday 08 March 2026 00:46:22 +0000 (0:00:43.508) 0:01:36.049 **********
2026-03-08 00:49:20.474176 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.474180 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.474184 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.474188 | orchestrator |
2026-03-08 00:49:20.474192 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-08 00:49:20.474196 | orchestrator | Sunday 08 March 2026 00:46:22 +0000 (0:00:00.328) 0:01:36.377 **********
2026-03-08 00:49:20.474200 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474204 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474208 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474211 | orchestrator |
2026-03-08 00:49:20.474215 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-08 00:49:20.474219 | orchestrator | Sunday 08 March 2026 00:46:23 +0000 (0:00:01.044) 0:01:37.422 **********
2026-03-08 00:49:20.474223 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474227 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474231 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474234 | orchestrator |
2026-03-08 00:49:20.474245 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-08 00:49:20.474250 | orchestrator | Sunday 08 March 2026 00:46:25 +0000 (0:00:01.580) 0:01:39.002 **********
2026-03-08 00:49:20.474253 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474257 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474261 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474265 | orchestrator |
2026-03-08 00:49:20.474269 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-08 00:49:20.474273 | orchestrator | Sunday 08 March 2026 00:46:51 +0000 (0:00:26.429) 0:02:05.432 **********
2026-03-08 00:49:20.474277 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474280 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474285 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474288 | orchestrator |
2026-03-08 00:49:20.474292 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-08 00:49:20.474296 | orchestrator | Sunday 08 March 2026 00:46:52 +0000 (0:00:00.843) 0:02:06.276 **********
2026-03-08 00:49:20.474300 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474304 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474311 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474315 | orchestrator |
2026-03-08 00:49:20.474318 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-08 00:49:20.474322 | orchestrator | Sunday 08 March 2026 00:46:53 +0000 (0:00:00.765) 0:02:07.041 **********
2026-03-08 00:49:20.474326 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474330 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474334 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474338 | orchestrator |
2026-03-08 00:49:20.474341 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-08 00:49:20.474345 | orchestrator | Sunday 08 March 2026 00:46:54 +0000 (0:00:00.754) 0:02:07.795 **********
2026-03-08 00:49:20.474349 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474353 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474356 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474360 | orchestrator |
2026-03-08 00:49:20.474367 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-08 00:49:20.474371 | orchestrator | Sunday 08 March 2026 00:46:55 +0000 (0:00:00.986) 0:02:08.782 **********
2026-03-08 00:49:20.474374 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474378 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474382 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474386 | orchestrator |
2026-03-08 00:49:20.474389 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-08 00:49:20.474393 | orchestrator | Sunday 08 March 2026 00:46:55 +0000 (0:00:00.333) 0:02:09.115 **********
2026-03-08 00:49:20.474397 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474401 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474405 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474409 | orchestrator |
2026-03-08 00:49:20.474413 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-08 00:49:20.474417 | orchestrator | Sunday 08 March 2026 00:46:56 +0000 (0:00:00.615) 0:02:09.731 **********
2026-03-08 00:49:20.474421 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474425 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474428 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474432 | orchestrator |
2026-03-08 00:49:20.474436 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-08 00:49:20.474439 | orchestrator | Sunday 08 March 2026 00:46:56 +0000 (0:00:00.632) 0:02:10.363 **********
2026-03-08 00:49:20.474443 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474447 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474451 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474454 | orchestrator |
2026-03-08 00:49:20.474458 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-08 00:49:20.474462 | orchestrator | Sunday 08 March 2026 00:46:57 +0000 (0:00:01.167) 0:02:11.531 **********
2026-03-08 00:49:20.474466 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:20.474470 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:20.474473 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:20.474477 | orchestrator |
2026-03-08 00:49:20.474481 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-08 00:49:20.474485 | orchestrator | Sunday 08 March 2026 00:46:58 +0000 (0:00:00.829) 0:02:12.360 **********
2026-03-08 00:49:20.474488 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.474492 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.474496 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.474500 | orchestrator |
2026-03-08 00:49:20.474504 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-08 00:49:20.474508 | orchestrator | Sunday 08 March 2026 00:46:58 +0000 (0:00:00.285) 0:02:12.646 **********
2026-03-08 00:49:20.474511 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.474515 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.474519 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.474523 | orchestrator |
2026-03-08 00:49:20.474530 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-08 00:49:20.474534 | orchestrator | Sunday 08 March 2026 00:46:59 +0000 (0:00:00.324) 0:02:12.971 **********
2026-03-08 00:49:20.474538 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474542 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474545 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474549 | orchestrator |
2026-03-08 00:49:20.474553 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-08 00:49:20.474557 | orchestrator | Sunday 08 March 2026 00:47:00 +0000 (0:00:00.985) 0:02:13.956 **********
2026-03-08 00:49:20.474560 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.474564 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.474568 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.474572 | orchestrator |
2026-03-08 00:49:20.474576 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-08 00:49:20.474580 | orchestrator | Sunday 08 March 2026 00:47:00 +0000 (0:00:00.681) 0:02:14.638 **********
2026-03-08 00:49:20.474584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-08 00:49:20.474591 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-08 00:49:20.474595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-08 00:49:20.474599 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-08 00:49:20.474603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-08 00:49:20.474607 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-08 00:49:20.474611 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-08 00:49:20.474614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-08 00:49:20.474618 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-08 00:49:20.474622 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-08 00:49:20.474626 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-08 00:49:20.474630 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-08 00:49:20.474634 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-08 00:49:20.474637 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-08 00:49:20.474644 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-08 00:49:20.474648 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-08 00:49:20.474652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-08 00:49:20.474655 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-08 00:49:20.474662 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-08 00:49:20.474668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-08 00:49:20.474675 | orchestrator |
2026-03-08 00:49:20.474683 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-08 00:49:20.474690 | orchestrator |
2026-03-08 00:49:20.474697 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-08 00:49:20.474704 | orchestrator | Sunday 08 March 2026 00:47:04 +0000 (0:00:03.216) 0:02:17.854 **********
2026-03-08 00:49:20.474714 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:20.474722 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:20.474728 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:20.474734 | orchestrator |
2026-03-08 00:49:20.474740 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-08 00:49:20.474746 | orchestrator | Sunday 08 March 2026 00:47:04 +0000 (0:00:00.602) 0:02:18.456 **********
2026-03-08 00:49:20.474754 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:20.474763 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:20.474771 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:20.474777 | orchestrator |
2026-03-08 00:49:20.474783 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-08 00:49:20.474789 | orchestrator | Sunday 08 March 2026 00:47:05 +0000 (0:00:00.653) 0:02:19.109 **********
2026-03-08 00:49:20.474796 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:20.474823 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:20.474830 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:20.474835 | orchestrator |
2026-03-08 00:49:20.474841 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-08 00:49:20.474847 | orchestrator | Sunday 08 March 2026 00:47:05 +0000 (0:00:00.399) 0:02:19.509 **********
2026-03-08 00:49:20.474853 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:49:20.474860 | orchestrator |
2026-03-08 00:49:20.474866 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-08 00:49:20.474872 | orchestrator | Sunday 08 March 2026 00:47:06 +0000 (0:00:00.705) 0:02:20.214 **********
2026-03-08 00:49:20.474878 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:20.474885 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.474893 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.474897 | orchestrator |
2026-03-08 00:49:20.474901 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-08 00:49:20.474904 | orchestrator | Sunday 08 March 2026 00:47:06 +0000 (0:00:00.325) 0:02:20.540 **********
2026-03-08 00:49:20.474908 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:20.474912 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.474916 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.474920 | orchestrator |
2026-03-08 00:49:20.474924 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-08 00:49:20.474927 | orchestrator | Sunday 08 March 2026 00:47:07 +0000 (0:00:00.344) 0:02:20.884 **********
2026-03-08 00:49:20.474931 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:20.474935 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.474939 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.474942 | orchestrator |
2026-03-08 00:49:20.474946 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-08 00:49:20.474950 | orchestrator | Sunday 08 March 2026 00:47:07 +0000 (0:00:00.302) 0:02:21.187 **********
2026-03-08 00:49:20.474954 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:20.474957 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:20.474961 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:20.474965 | orchestrator |
2026-03-08 00:49:20.474974 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-08 00:49:20.474978 | orchestrator | Sunday 08 March 2026 00:47:08 +0000 (0:00:00.902) 0:02:22.090 **********
2026-03-08 00:49:20.474982 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:20.474985 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:20.474989 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:20.474993 | orchestrator |
2026-03-08 00:49:20.474997 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-08 00:49:20.475000 | orchestrator | Sunday 08 March 2026 00:47:09 +0000 (0:00:01.065) 0:02:23.156 **********
2026-03-08 00:49:20.475004 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:20.475008 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:20.475016 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:20.475021 | orchestrator |
2026-03-08 00:49:20.475024 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-08 00:49:20.475028 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:01.162) 0:02:24.319 **********
2026-03-08 00:49:20.475032 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:20.475036 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:20.475040 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:20.475044 | orchestrator |
2026-03-08 00:49:20.475048 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-08 00:49:20.475052 | orchestrator |
2026-03-08 00:49:20.475056 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-08 00:49:20.475060 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:10.297) 0:02:34.617 **********
2026-03-08 00:49:20.475064 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475068 | orchestrator |
2026-03-08 00:49:20.475072 | orchestrator | TASK [Create .kube directory] **************************************************
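The "Prepare kubeconfig file" play starting above sets up kubectl access on the manager: the following tasks fetch the admin kubeconfig from the first master and rewrite its server address so the API is reachable from outside the node. In shell this is roughly the sketch below (an assumption about what the tasks do, not their actual implementation; k3s writes its admin kubeconfig to /etc/rancher/k3s/k3s.yaml pointing at https://127.0.0.1:6443, and the `SRC`/`DEST` file names here are illustrative, while the API address is the one seen earlier in the log):

```shell
# Sketch of the kubeconfig preparation performed by the tasks below.
SRC=${SRC:-k3s.yaml}                   # on a real master: /etc/rancher/k3s/k3s.yaml
DEST=${DEST:-kubeconfig}               # on the manager: ~/.kube/config
API=${API:-https://192.168.16.8:6443}  # cluster VIP from the log

# Stand-in for the file fetched from testbed-node-0 ("Get kubeconfig file")
printf 'server: https://127.0.0.1:6443\n' > "$SRC"

# "Change server address in the kubeconfig"
sed "s|https://127.0.0.1:6443|$API|" "$SRC" > "$DEST"
chmod 600 "$DEST"
```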
2026-03-08 00:49:20.475075 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.694) 0:02:35.311 **********
2026-03-08 00:49:20.475079 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475083 | orchestrator |
2026-03-08 00:49:20.475091 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-08 00:49:20.475095 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:00.423) 0:02:35.735 **********
2026-03-08 00:49:20.475099 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-08 00:49:20.475103 | orchestrator |
2026-03-08 00:49:20.475107 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-08 00:49:20.475110 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:00.656) 0:02:36.391 **********
2026-03-08 00:49:20.475114 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475118 | orchestrator |
2026-03-08 00:49:20.475122 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-08 00:49:20.475125 | orchestrator | Sunday 08 March 2026 00:47:23 +0000 (0:00:00.802) 0:02:37.193 **********
2026-03-08 00:49:20.475129 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475133 | orchestrator |
2026-03-08 00:49:20.475137 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-08 00:49:20.475141 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.601) 0:02:37.794 **********
2026-03-08 00:49:20.475145 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:49:20.475149 | orchestrator |
2026-03-08 00:49:20.475152 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-08 00:49:20.475156 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:01.621) 0:02:39.416 **********
2026-03-08 00:49:20.475160 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:49:20.475164 | orchestrator |
2026-03-08 00:49:20.475168 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-08 00:49:20.475172 | orchestrator | Sunday 08 March 2026 00:47:26 +0000 (0:00:00.902) 0:02:40.319 **********
2026-03-08 00:49:20.475175 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475179 | orchestrator |
2026-03-08 00:49:20.475183 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-08 00:49:20.475187 | orchestrator | Sunday 08 March 2026 00:47:27 +0000 (0:00:00.653) 0:02:40.973 **********
2026-03-08 00:49:20.475191 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475195 | orchestrator |
2026-03-08 00:49:20.475199 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-08 00:49:20.475203 | orchestrator |
2026-03-08 00:49:20.475206 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-08 00:49:20.475210 | orchestrator | Sunday 08 March 2026 00:47:27 +0000 (0:00:00.519) 0:02:41.493 **********
2026-03-08 00:49:20.475214 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475218 | orchestrator |
2026-03-08 00:49:20.475221 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-08 00:49:20.475232 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:00.224) 0:02:41.717 **********
2026-03-08 00:49:20.475235 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-08 00:49:20.475239 | orchestrator |
2026-03-08 00:49:20.475243 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-08 00:49:20.475247 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:00.272) 0:02:41.989 **********
2026-03-08 00:49:20.475251 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475255 | orchestrator |
2026-03-08 00:49:20.475258 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-08 00:49:20.475262 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:01.100) 0:02:43.090 **********
2026-03-08 00:49:20.475266 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475270 | orchestrator |
2026-03-08 00:49:20.475274 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-08 00:49:20.475278 | orchestrator | Sunday 08 March 2026 00:47:30 +0000 (0:00:01.511) 0:02:44.602 **********
2026-03-08 00:49:20.475282 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475286 | orchestrator |
2026-03-08 00:49:20.475289 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-08 00:49:20.475293 | orchestrator | Sunday 08 March 2026 00:47:31 +0000 (0:00:00.742) 0:02:45.344 **********
2026-03-08 00:49:20.475297 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475301 | orchestrator |
2026-03-08 00:49:20.475308 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-08 00:49:20.475312 | orchestrator | Sunday 08 March 2026 00:47:32 +0000 (0:00:00.433) 0:02:45.778 **********
2026-03-08 00:49:20.475316 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475319 | orchestrator |
2026-03-08 00:49:20.475323 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-08 00:49:20.475327 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:06.847) 0:02:52.625 **********
2026-03-08 00:49:20.475331 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475335 | orchestrator |
2026-03-08 00:49:20.475339 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-08 00:49:20.475343 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:14.401) 0:03:07.027 **********
2026-03-08 00:49:20.475346 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475350 | orchestrator |
2026-03-08 00:49:20.475354 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-08 00:49:20.475358 | orchestrator |
2026-03-08 00:49:20.475362 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-08 00:49:20.475365 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:00.656) 0:03:07.683 **********
2026-03-08 00:49:20.475369 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.475373 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.475377 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.475381 | orchestrator |
2026-03-08 00:49:20.475384 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-08 00:49:20.475388 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.406) 0:03:08.090 **********
2026-03-08 00:49:20.475392 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.475396 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.475400 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.475403 | orchestrator |
2026-03-08 00:49:20.475407 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-08 00:49:20.475414 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.397) 0:03:08.488 **********
2026-03-08 00:49:20.475418 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:49:20.475422 | orchestrator |
2026-03-08 00:49:20.475425 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-08 00:49:20.475429 | orchestrator | Sunday 08
March 2026 00:47:55 +0000 (0:00:00.691) 0:03:09.179 ********** 2026-03-08 00:49:20.475436 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-08 00:49:20.475440 | orchestrator | 2026-03-08 00:49:20.475444 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-08 00:49:20.475448 | orchestrator | Sunday 08 March 2026 00:47:56 +0000 (0:00:00.758) 0:03:09.938 ********** 2026-03-08 00:49:20.475452 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:49:20.475455 | orchestrator | 2026-03-08 00:49:20.475459 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-08 00:49:20.475463 | orchestrator | Sunday 08 March 2026 00:47:57 +0000 (0:00:00.814) 0:03:10.753 ********** 2026-03-08 00:49:20.475467 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.475471 | orchestrator | 2026-03-08 00:49:20.475475 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-08 00:49:20.475479 | orchestrator | Sunday 08 March 2026 00:47:57 +0000 (0:00:00.127) 0:03:10.880 ********** 2026-03-08 00:49:20.475483 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:49:20.475487 | orchestrator | 2026-03-08 00:49:20.475490 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-08 00:49:20.475494 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:01.137) 0:03:12.017 ********** 2026-03-08 00:49:20.475498 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.475502 | orchestrator | 2026-03-08 00:49:20.475506 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-08 00:49:20.475510 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:00.138) 0:03:12.155 ********** 2026-03-08 00:49:20.475514 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.475518 | orchestrator | 2026-03-08 
00:49:20.475522 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-08 00:49:20.475525 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:00.150) 0:03:12.306 ********** 2026-03-08 00:49:20.475529 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.475533 | orchestrator | 2026-03-08 00:49:20.475537 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-08 00:49:20.475541 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:00.116) 0:03:12.423 ********** 2026-03-08 00:49:20.475545 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:20.475548 | orchestrator | 2026-03-08 00:49:20.475552 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-08 00:49:20.475556 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:00.120) 0:03:12.543 ********** 2026-03-08 00:49:20.475559 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-08 00:49:20.475563 | orchestrator | 2026-03-08 00:49:20.475567 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-08 00:49:20.475571 | orchestrator | Sunday 08 March 2026 00:48:04 +0000 (0:00:05.995) 0:03:18.538 ********** 2026-03-08 00:49:20.475575 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-08 00:49:20.475579 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-08 00:49:20.475583 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-08 00:49:20.475587 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-08 00:49:20.475591 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-08 00:49:20.475595 | orchestrator |
2026-03-08 00:49:20.475598 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-08 00:49:20.475602 | orchestrator | Sunday 08 March 2026 00:48:47 +0000 (0:00:42.754) 0:04:01.293 **********
2026-03-08 00:49:20.475609 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:49:20.475613 | orchestrator |
2026-03-08 00:49:20.475616 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-08 00:49:20.475620 | orchestrator | Sunday 08 March 2026 00:48:48 +0000 (0:00:01.205) 0:04:02.498 **********
2026-03-08 00:49:20.475628 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-08 00:49:20.475632 | orchestrator |
2026-03-08 00:49:20.475635 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-08 00:49:20.475639 | orchestrator | Sunday 08 March 2026 00:48:50 +0000 (0:00:01.745) 0:04:04.244 **********
2026-03-08 00:49:20.475643 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-08 00:49:20.475647 | orchestrator |
2026-03-08 00:49:20.475651 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-08 00:49:20.475655 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:01.202) 0:04:05.446 **********
2026-03-08 00:49:20.475658 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.475662 | orchestrator |
2026-03-08 00:49:20.475666 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-08 00:49:20.475670 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:00.144) 0:04:05.591 **********
2026-03-08 00:49:20.475673 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-08 00:49:20.475677 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-08 00:49:20.475681 | orchestrator |
2026-03-08 00:49:20.475685 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-08 00:49:20.475689 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:01.883) 0:04:07.475 **********
2026-03-08 00:49:20.475693 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.475696 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.475700 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.475704 | orchestrator |
2026-03-08 00:49:20.475710 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-08 00:49:20.475714 | orchestrator | Sunday 08 March 2026 00:48:54 +0000 (0:00:00.340) 0:04:07.815 **********
2026-03-08 00:49:20.475718 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.475721 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.475725 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.475729 | orchestrator |
2026-03-08 00:49:20.475733 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-08 00:49:20.475737 | orchestrator |
2026-03-08 00:49:20.475741 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-08 00:49:20.475744 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:01.283) 0:04:09.099 **********
2026-03-08 00:49:20.475748 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:20.475752 | orchestrator |
2026-03-08 00:49:20.475758 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-08 00:49:20.475764 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:00.178) 0:04:09.277 **********
2026-03-08 00:49:20.475770 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-08 00:49:20.475776 | orchestrator |
2026-03-08 00:49:20.475781 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-08 00:49:20.475787 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:00.276) 0:04:09.554 **********
2026-03-08 00:49:20.475793 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:20.475836 | orchestrator |
2026-03-08 00:49:20.475845 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-08 00:49:20.475852 | orchestrator |
2026-03-08 00:49:20.475858 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-08 00:49:20.475865 | orchestrator | Sunday 08 March 2026 00:49:01 +0000 (0:00:05.871) 0:04:15.425 **********
2026-03-08 00:49:20.475871 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:20.475878 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:20.475884 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:20.475890 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:20.475896 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:20.475902 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:20.475908 | orchestrator |
2026-03-08 00:49:20.475914 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-08 00:49:20.475927 | orchestrator | Sunday 08 March 2026 00:49:02 +0000 (0:00:00.746) 0:04:16.171 **********
2026-03-08 00:49:20.475933 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-08 00:49:20.475939 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-08 00:49:20.475945 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-08 00:49:20.475951 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-08 00:49:20.475957 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-08 00:49:20.475963 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-08 00:49:20.475970 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-08 00:49:20.475975 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-08 00:49:20.475981 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-08 00:49:20.475988 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-08 00:49:20.475994 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-08 00:49:20.476001 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-08 00:49:20.476013 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-08 00:49:20.476020 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-08 00:49:20.476024 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-08 00:49:20.476028 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-08 00:49:20.476033 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-08 00:49:20.476036 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-08 00:49:20.476040 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-08 00:49:20.476044 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-08 00:49:20.476048 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-08 00:49:20.476052 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-08 00:49:20.476056 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-08 00:49:20.476060 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-08 00:49:20.476063 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-08 00:49:20.476067 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-08 00:49:20.476071 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-08 00:49:20.476079 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-08 00:49:20.476083 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-08 00:49:20.476087 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-08 00:49:20.476090 | orchestrator |
2026-03-08 00:49:20.476095 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-08 00:49:20.476098 | orchestrator | Sunday 08 March 2026 00:49:17 +0000 (0:00:15.133) 0:04:31.305 **********
2026-03-08 00:49:20.476102 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:20.476110 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.476114 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.476118 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.476121 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.476125 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.476129 | orchestrator |
2026-03-08 00:49:20.476133 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-08 00:49:20.476137 | orchestrator | Sunday 08 March 2026 00:49:18 +0000 (0:00:00.899) 0:04:32.204 **********
2026-03-08 00:49:20.476140 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:20.476144 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:20.476148 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:20.476152 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:20.476155 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:20.476159 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:20.476163 | orchestrator |
2026-03-08 00:49:20.476167 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:49:20.476171 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:49:20.476177 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-08 00:49:20.476182 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-08 00:49:20.476186 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-08 00:49:20.476189 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-08 00:49:20.476193 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-08 00:49:20.476197 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-08 00:49:20.476201 | orchestrator |
2026-03-08 00:49:20.476205 | orchestrator |
2026-03-08 00:49:20.476209 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:49:20.476213 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:00.525) 0:04:32.730 **********
2026-03-08 00:49:20.476217 | orchestrator | ===============================================================================
2026-03-08 00:49:20.476221 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.51s
2026-03-08 00:49:20.476225 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.75s
2026-03-08 00:49:20.476229 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.43s
2026-03-08 00:49:20.476236 | orchestrator | Manage labels ---------------------------------------------------------- 15.13s
2026-03-08 00:49:20.476240 | orchestrator | kubectl : Install required packages ------------------------------------ 14.40s
2026-03-08 00:49:20.476244 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.30s
2026-03-08 00:49:20.476247 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.85s
2026-03-08 00:49:20.476251 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.00s
2026-03-08 00:49:20.476255 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.87s
2026-03-08 00:49:20.476259 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.79s
2026-03-08 00:49:20.476263 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.29s
2026-03-08 00:49:20.476266 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.27s
2026-03-08 00:49:20.476274 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.22s
2026-03-08 00:49:20.476278 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.04s
2026-03-08 00:49:20.476282 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.27s
2026-03-08 00:49:20.476286 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.22s
2026-03-08 00:49:20.476290 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.19s
2026-03-08 00:49:20.476293 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.09s
2026-03-08 00:49:20.476297 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.88s
2026-03-08 00:49:20.476304 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.75s
2026-03-08 00:49:20.477093 | orchestrator | 2026-03-08 00:49:20 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:20.477115 | orchestrator | 2026-03-08 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:23.539163 | orchestrator | 2026-03-08 00:49:23 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:23.539258 | orchestrator | 2026-03-08 00:49:23 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:23.539271 | orchestrator | 2026-03-08 00:49:23 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:23.539281 | orchestrator | 2026-03-08 00:49:23 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:23.539289 | orchestrator | 2026-03-08 00:49:23 | INFO  | Task 23827221-e3d0-4708-a2b2-a3b59b57e515 is in state STARTED
2026-03-08 00:49:23.539297 | orchestrator | 2026-03-08 00:49:23 | INFO  | Task 158aa0c0-79ac-4c72-b45a-38e2c0c9f2ae is in state STARTED
2026-03-08 00:49:23.539307 | orchestrator | 2026-03-08 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:26.647598 | orchestrator | 2026-03-08 00:49:26 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:26.647701 | orchestrator | 2026-03-08 00:49:26 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:26.648584 | orchestrator | 2026-03-08 00:49:26 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:26.651296 | orchestrator | 2026-03-08 00:49:26 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:26.651904 | orchestrator | 2026-03-08 00:49:26 | INFO  | Task 23827221-e3d0-4708-a2b2-a3b59b57e515 is in state STARTED
2026-03-08 00:49:26.653063 | orchestrator | 2026-03-08 00:49:26 | INFO  | Task 158aa0c0-79ac-4c72-b45a-38e2c0c9f2ae is in state STARTED
2026-03-08 00:49:26.654234 | orchestrator | 2026-03-08 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:29.693937 | orchestrator | 2026-03-08 00:49:29 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:29.696346 | orchestrator | 2026-03-08 00:49:29 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:29.698226 | orchestrator | 2026-03-08 00:49:29 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:29.699508 | orchestrator | 2026-03-08 00:49:29 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:29.701579 | orchestrator | 2026-03-08 00:49:29 | INFO  | Task 23827221-e3d0-4708-a2b2-a3b59b57e515 is in state SUCCESS
2026-03-08 00:49:29.704732 | orchestrator | 2026-03-08 00:49:29 | INFO  | Task 158aa0c0-79ac-4c72-b45a-38e2c0c9f2ae is in state STARTED
2026-03-08 00:49:29.704821 | orchestrator | 2026-03-08 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:32.754720 | orchestrator | 2026-03-08 00:49:32 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:32.756661 | orchestrator | 2026-03-08 00:49:32 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:32.758398 | orchestrator | 2026-03-08 00:49:32 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:32.761300 | orchestrator | 2026-03-08 00:49:32 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:32.762765 | orchestrator | 2026-03-08 00:49:32 | INFO  | Task 158aa0c0-79ac-4c72-b45a-38e2c0c9f2ae is in state STARTED
2026-03-08 00:49:32.762872 | orchestrator | 2026-03-08 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:35.811374 | orchestrator | 2026-03-08 00:49:35 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:35.812256 | orchestrator | 2026-03-08 00:49:35 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:35.813037 | orchestrator | 2026-03-08 00:49:35 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:35.815507 | orchestrator | 2026-03-08 00:49:35 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:35.816012 | orchestrator | 2026-03-08 00:49:35 | INFO  | Task 158aa0c0-79ac-4c72-b45a-38e2c0c9f2ae is in state SUCCESS
2026-03-08 00:49:35.816053 | orchestrator | 2026-03-08 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:38.863089 | orchestrator | 2026-03-08 00:49:38 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:38.865167 | orchestrator | 2026-03-08 00:49:38 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:38.867566 | orchestrator | 2026-03-08 00:49:38 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:38.869584 | orchestrator | 2026-03-08 00:49:38 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:38.869629 | orchestrator | 2026-03-08 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:41.923411 | orchestrator | 2026-03-08 00:49:41 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:41.925247 | orchestrator | 2026-03-08 00:49:41 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:41.927078 | orchestrator | 2026-03-08 00:49:41 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:41.929007 | orchestrator | 2026-03-08 00:49:41 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:41.929333 | orchestrator | 2026-03-08 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:44.961397 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:44.961995 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:44.963076 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:44.965781 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:44.965821 | orchestrator | 2026-03-08 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:48.012476 | orchestrator | 2026-03-08 00:49:48 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:48.015646 | orchestrator | 2026-03-08 00:49:48 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:48.015708 | orchestrator | 2026-03-08 00:49:48 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:48.015720 | orchestrator | 2026-03-08 00:49:48 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:48.015758 | orchestrator | 2026-03-08 00:49:48 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:51.051577 | orchestrator | 2026-03-08 00:49:51 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:51.051678 | orchestrator | 2026-03-08 00:49:51 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:51.052517 | orchestrator | 2026-03-08 00:49:51 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:51.053556 | orchestrator | 2026-03-08 00:49:51 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:51.053585 | orchestrator | 2026-03-08 00:49:51 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:54.097529 | orchestrator | 2026-03-08 00:49:54 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:54.097999 | orchestrator | 2026-03-08 00:49:54 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:54.098819 | orchestrator | 2026-03-08 00:49:54 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:54.102245 | orchestrator | 2026-03-08 00:49:54 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:54.102305 | orchestrator | 2026-03-08 00:49:54 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:49:57.143319 | orchestrator | 2026-03-08 00:49:57 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:49:57.143424 | orchestrator | 2026-03-08 00:49:57 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:49:57.144368 | orchestrator | 2026-03-08 00:49:57 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:49:57.145193 | orchestrator | 2026-03-08 00:49:57 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:49:57.145232 | orchestrator | 2026-03-08 00:49:57 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:00.188991 | orchestrator | 2026-03-08 00:50:00 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:50:00.189152 | orchestrator | 2026-03-08 00:50:00 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:50:00.190150 | orchestrator | 2026-03-08 00:50:00 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:50:00.193002 | orchestrator | 2026-03-08 00:50:00 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:50:00.193048 | orchestrator | 2026-03-08 00:50:00 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:03.230658 | orchestrator | 2026-03-08 00:50:03 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:50:03.231332 | orchestrator | 2026-03-08 00:50:03 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:50:03.233729 | orchestrator | 2026-03-08 00:50:03 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:50:03.235168 | orchestrator | 2026-03-08 00:50:03 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:50:03.238480 | orchestrator | 2026-03-08 00:50:03 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:06.310911 | orchestrator | 2026-03-08 00:50:06 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:50:06.315347 | orchestrator | 2026-03-08 00:50:06 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:50:06.317502 | orchestrator | 2026-03-08 00:50:06 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:50:06.320102 | orchestrator | 2026-03-08 00:50:06 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state STARTED
2026-03-08 00:50:06.320166 | orchestrator | 2026-03-08 00:50:06 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:09.371540 | orchestrator | 2026-03-08 00:50:09 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:50:09.372447 | orchestrator | 2026-03-08 00:50:09 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:50:09.373610 | orchestrator | 2026-03-08 00:50:09 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:50:09.375346 | orchestrator | 2026-03-08 00:50:09 | INFO  | Task 26b2a0fa-172e-4155-ab1d-b258c5e2d8c8 is in state SUCCESS
2026-03-08 00:50:09.377234 | orchestrator |
2026-03-08 00:50:09.377288 | orchestrator |
2026-03-08 00:50:09.377300 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-08 00:50:09.377309 | orchestrator |
2026-03-08 00:50:09.377316 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-08 00:50:09.377326 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.165) 0:00:00.165 **********
2026-03-08 00:50:09.377338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-08 00:50:09.377350 | orchestrator |
2026-03-08 00:50:09.377362 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-08 00:50:09.377373 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.819) 0:00:00.984 **********
2026-03-08 00:50:09.377380 | orchestrator | changed: [testbed-manager]
2026-03-08 00:50:09.377387 | orchestrator |
2026-03-08 00:50:09.377394 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-08 00:50:09.377401 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:01.389) 0:00:02.374 **********
2026-03-08 00:50:09.377408 | orchestrator | changed: [testbed-manager]
2026-03-08 00:50:09.377415 | orchestrator |
2026-03-08 00:50:09.377421 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:50:09.377428 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:50:09.377437 | orchestrator |
2026-03-08 00:50:09.377444 | orchestrator |
2026-03-08 00:50:09.377451 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:50:09.377458 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:00.430) 0:00:02.804 **********
2026-03-08 00:50:09.377480 | orchestrator | ===============================================================================
2026-03-08 00:50:09.377491 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.39s
2026-03-08 00:50:09.377502 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s
2026-03-08 00:50:09.377513 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s
2026-03-08 00:50:09.377524 | orchestrator |
2026-03-08 00:50:09.377534 | orchestrator |
2026-03-08 00:50:09.377545 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-08 00:50:09.377555 | orchestrator |
2026-03-08 00:50:09.377566 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-08 00:50:09.377601 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.218) 0:00:00.218 **********
2026-03-08 00:50:09.377615 | orchestrator | ok: [testbed-manager]
2026-03-08 00:50:09.377627 | orchestrator |
2026-03-08 00:50:09.377638 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-08 00:50:09.377650 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.633) 0:00:00.851 **********
2026-03-08 00:50:09.377661 | orchestrator | ok: [testbed-manager]
2026-03-08 00:50:09.377672 | orchestrator |
2026-03-08 00:50:09.377721 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-08 00:50:09.377736 | orchestrator | Sunday 08 March 2026 00:49:27 +0000 (0:00:00.945) 0:00:01.797 **********
2026-03-08 00:50:09.377748 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-08 00:50:09.377760 | orchestrator |
2026-03-08 00:50:09.377773 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-08 00:50:09.377785 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:00.779) 0:00:02.576 **********
2026-03-08 00:50:09.377793 | orchestrator | changed: [testbed-manager]
2026-03-08 00:50:09.377801 | orchestrator |
2026-03-08 00:50:09.377809 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-08 00:50:09.377817 | orchestrator | Sunday 08 March 2026 00:49:29 +0000 (0:00:01.322) 0:00:03.899 **********
2026-03-08 00:50:09.377825 | orchestrator | changed: [testbed-manager]
2026-03-08 00:50:09.377832 | orchestrator |
2026-03-08 00:50:09.377841 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-08 00:50:09.377848 | orchestrator | Sunday 08 March 2026 00:49:30 +0000 (0:00:00.545) 0:00:04.445 **********
2026-03-08 00:50:09.377856 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:50:09.377864 | orchestrator |
2026-03-08 00:50:09.377872 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-08 00:50:09.377879 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:01.640) 0:00:06.085 **********
2026-03-08
00:50:09.377888 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-08 00:50:09.377895 | orchestrator | 2026-03-08 00:50:09.377903 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-08 00:50:09.377911 | orchestrator | Sunday 08 March 2026 00:49:33 +0000 (0:00:00.856) 0:00:06.942 ********** 2026-03-08 00:50:09.377919 | orchestrator | ok: [testbed-manager] 2026-03-08 00:50:09.377927 | orchestrator | 2026-03-08 00:50:09.377935 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-08 00:50:09.377943 | orchestrator | Sunday 08 March 2026 00:49:33 +0000 (0:00:00.438) 0:00:07.380 ********** 2026-03-08 00:50:09.377950 | orchestrator | ok: [testbed-manager] 2026-03-08 00:50:09.377958 | orchestrator | 2026-03-08 00:50:09.377969 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:50:09.377980 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:50:09.377998 | orchestrator | 2026-03-08 00:50:09.378011 | orchestrator | 2026-03-08 00:50:09.378071 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:50:09.378084 | orchestrator | Sunday 08 March 2026 00:49:33 +0000 (0:00:00.371) 0:00:07.752 ********** 2026-03-08 00:50:09.378093 | orchestrator | =============================================================================== 2026-03-08 00:50:09.378101 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.64s 2026-03-08 00:50:09.378109 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.32s 2026-03-08 00:50:09.378116 | orchestrator | Create .kube directory -------------------------------------------------- 0.95s 2026-03-08 00:50:09.378139 | orchestrator | Change server address in the kubeconfig inside the 
manager service ------ 0.86s 2026-03-08 00:50:09.378148 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2026-03-08 00:50:09.378156 | orchestrator | Get home directory of operator user ------------------------------------- 0.63s 2026-03-08 00:50:09.378172 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.55s 2026-03-08 00:50:09.378179 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2026-03-08 00:50:09.378185 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.37s 2026-03-08 00:50:09.378192 | orchestrator | 2026-03-08 00:50:09.378199 | orchestrator | 2026-03-08 00:50:09.378205 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-08 00:50:09.378212 | orchestrator | 2026-03-08 00:50:09.378219 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-08 00:50:09.378225 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.124) 0:00:00.124 ********** 2026-03-08 00:50:09.378232 | orchestrator | ok: [localhost] => { 2026-03-08 00:50:09.378239 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-08 00:50:09.378246 | orchestrator | } 2026-03-08 00:50:09.378253 | orchestrator | 2026-03-08 00:50:09.378260 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-08 00:50:09.378266 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.051) 0:00:00.175 ********** 2026-03-08 00:50:09.378274 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-08 00:50:09.378282 | orchestrator | ...ignoring 2026-03-08 00:50:09.378289 | orchestrator | 2026-03-08 00:50:09.378295 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-08 00:50:09.378302 | orchestrator | Sunday 08 March 2026 00:47:41 +0000 (0:00:02.938) 0:00:03.113 ********** 2026-03-08 00:50:09.378309 | orchestrator | skipping: [localhost] 2026-03-08 00:50:09.378315 | orchestrator | 2026-03-08 00:50:09.378322 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-08 00:50:09.378328 | orchestrator | Sunday 08 March 2026 00:47:41 +0000 (0:00:00.067) 0:00:03.181 ********** 2026-03-08 00:50:09.378335 | orchestrator | ok: [localhost] 2026-03-08 00:50:09.378341 | orchestrator | 2026-03-08 00:50:09.378348 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:50:09.378354 | orchestrator | 2026-03-08 00:50:09.378361 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:50:09.378368 | orchestrator | Sunday 08 March 2026 00:47:42 +0000 (0:00:00.166) 0:00:03.348 ********** 2026-03-08 00:50:09.378374 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:09.378381 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:09.378387 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:09.378394 | orchestrator | 2026-03-08 00:50:09.378400 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:50:09.378407 | orchestrator | Sunday 08 March 2026 00:47:42 +0000 (0:00:00.343) 0:00:03.691 ********** 2026-03-08 00:50:09.378414 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-08 00:50:09.378421 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-03-08 00:50:09.378428 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-08 00:50:09.378434 | orchestrator | 2026-03-08 00:50:09.378441 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-08 00:50:09.378448 | orchestrator | 2026-03-08 00:50:09.378454 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-08 00:50:09.378461 | orchestrator | Sunday 08 March 2026 00:47:43 +0000 (0:00:00.618) 0:00:04.309 ********** 2026-03-08 00:50:09.378468 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:50:09.378474 | orchestrator | 2026-03-08 00:50:09.378481 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-08 00:50:09.378488 | orchestrator | Sunday 08 March 2026 00:47:43 +0000 (0:00:00.805) 0:00:05.115 ********** 2026-03-08 00:50:09.378494 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:09.378501 | orchestrator | 2026-03-08 00:50:09.378512 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-08 00:50:09.378518 | orchestrator | Sunday 08 March 2026 00:47:44 +0000 (0:00:00.964) 0:00:06.079 ********** 2026-03-08 00:50:09.378525 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:09.378532 | orchestrator | 2026-03-08 00:50:09.378538 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-08 00:50:09.378545 | orchestrator | Sunday 08 March 2026 00:47:45 +0000 (0:00:00.531) 0:00:06.611 ********** 2026-03-08 00:50:09.378551 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:09.378558 | orchestrator | 2026-03-08 00:50:09.378564 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-08 00:50:09.378571 | 
orchestrator | Sunday 08 March 2026 00:47:45 +0000 (0:00:00.506) 0:00:07.118 ********** 2026-03-08 00:50:09.378577 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:09.378584 | orchestrator | 2026-03-08 00:50:09.378591 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-08 00:50:09.378597 | orchestrator | Sunday 08 March 2026 00:47:46 +0000 (0:00:00.737) 0:00:07.855 ********** 2026-03-08 00:50:09.378604 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:09.378610 | orchestrator | 2026-03-08 00:50:09.378617 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-08 00:50:09.378623 | orchestrator | Sunday 08 March 2026 00:47:47 +0000 (0:00:01.071) 0:00:08.926 ********** 2026-03-08 00:50:09.378630 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:50:09.378637 | orchestrator | 2026-03-08 00:50:09.378643 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-08 00:50:09.378655 | orchestrator | Sunday 08 March 2026 00:47:48 +0000 (0:00:00.670) 0:00:09.597 ********** 2026-03-08 00:50:09.378662 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:09.378668 | orchestrator | 2026-03-08 00:50:09.378675 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-08 00:50:09.378682 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:01.234) 0:00:10.831 ********** 2026-03-08 00:50:09.378688 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:09.378718 | orchestrator | 2026-03-08 00:50:09.378729 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-08 00:50:09.378740 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:00.633) 0:00:11.465 ********** 2026-03-08 00:50:09.378750 | orchestrator | 
skipping: [testbed-node-0] 2026-03-08 00:50:09.378760 | orchestrator | 2026-03-08 00:50:09.378772 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-08 00:50:09.378781 | orchestrator | Sunday 08 March 2026 00:47:51 +0000 (0:00:01.244) 0:00:12.711 ********** 2026-03-08 00:50:09.378856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.378880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.378895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.378902 | orchestrator | 2026-03-08 00:50:09.378909 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-08 00:50:09.378917 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:02.089) 0:00:14.801 ********** 2026-03-08 00:50:09.378935 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.378960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.378982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.378994 | orchestrator | 2026-03-08 00:50:09.379005 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-08 00:50:09.379016 | orchestrator | Sunday 08 March 2026 00:47:56 +0000 (0:00:02.586) 0:00:17.387 ********** 2026-03-08 00:50:09.379027 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-08 00:50:09.379039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-08 00:50:09.379051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-08 00:50:09.379062 | 
orchestrator | 2026-03-08 00:50:09.379074 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-08 00:50:09.379084 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:02.473) 0:00:19.861 ********** 2026-03-08 00:50:09.379094 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-08 00:50:09.379103 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-08 00:50:09.379113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-08 00:50:09.379124 | orchestrator | 2026-03-08 00:50:09.379138 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-08 00:50:09.379145 | orchestrator | Sunday 08 March 2026 00:48:01 +0000 (0:00:02.543) 0:00:22.404 ********** 2026-03-08 00:50:09.379151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-08 00:50:09.379158 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-08 00:50:09.379164 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-08 00:50:09.379171 | orchestrator | 2026-03-08 00:50:09.379178 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-08 00:50:09.379184 | orchestrator | Sunday 08 March 2026 00:48:03 +0000 (0:00:02.234) 0:00:24.639 ********** 2026-03-08 00:50:09.379191 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-08 00:50:09.379197 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-08 00:50:09.379204 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-08 00:50:09.379211 | orchestrator | 2026-03-08 00:50:09.379217 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-08 00:50:09.379230 | orchestrator | Sunday 08 March 2026 00:48:06 +0000 (0:00:02.871) 0:00:27.511 ********** 2026-03-08 00:50:09.379236 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-08 00:50:09.379243 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-08 00:50:09.379250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-08 00:50:09.379256 | orchestrator | 2026-03-08 00:50:09.379263 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-08 00:50:09.379270 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:02.183) 0:00:29.694 ********** 2026-03-08 00:50:09.379276 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-08 00:50:09.379283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-08 00:50:09.379289 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-08 00:50:09.379296 | orchestrator | 2026-03-08 00:50:09.379304 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-08 00:50:09.379315 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:02.353) 0:00:32.047 ********** 2026-03-08 00:50:09.379331 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:09.379349 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:09.379360 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:09.379370 | orchestrator | 2026-03-08 
00:50:09.379380 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-08 00:50:09.379391 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:00.922) 0:00:32.970 ********** 2026-03-08 00:50:09.379403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.379426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.379446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:50:09.379457 | orchestrator | 2026-03-08 00:50:09.379468 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-08 00:50:09.379479 | orchestrator | Sunday 08 March 2026 00:48:14 +0000 (0:00:02.656) 0:00:35.626 ********** 2026-03-08 00:50:09.379489 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:50:09.379500 | orchestrator | changed: [testbed-node-1] 
2026-03-08 00:50:09.379511 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:09.379521 | orchestrator |
2026-03-08 00:50:09.379533 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-08 00:50:09.379544 | orchestrator | Sunday 08 March 2026 00:48:16 +0000 (0:00:02.363) 0:00:37.990 **********
2026-03-08 00:50:09.379554 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:09.379564 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:09.379574 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:09.379585 | orchestrator |
2026-03-08 00:50:09.379601 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-08 00:50:09.379613 | orchestrator | Sunday 08 March 2026 00:48:25 +0000 (0:00:08.296) 0:00:46.287 **********
2026-03-08 00:50:09.379623 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:09.379634 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:09.379645 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:09.379656 | orchestrator |
2026-03-08 00:50:09.379667 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-08 00:50:09.379677 | orchestrator |
2026-03-08 00:50:09.379687 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-08 00:50:09.379735 | orchestrator | Sunday 08 March 2026 00:48:25 +0000 (0:00:00.388) 0:00:46.675 **********
2026-03-08 00:50:09.379748 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:09.379759 | orchestrator |
2026-03-08 00:50:09.379770 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-08 00:50:09.379781 | orchestrator | Sunday 08 March 2026 00:48:26 +0000 (0:00:00.697) 0:00:47.373 **********
2026-03-08 00:50:09.379792 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:50:09.379803 | orchestrator |
2026-03-08 00:50:09.379815 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-08 00:50:09.379826 | orchestrator | Sunday 08 March 2026 00:48:26 +0000 (0:00:00.291) 0:00:47.664 **********
2026-03-08 00:50:09.379838 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:09.379849 | orchestrator |
2026-03-08 00:50:09.379861 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-08 00:50:09.379873 | orchestrator | Sunday 08 March 2026 00:48:33 +0000 (0:00:06.933) 0:00:54.598 **********
2026-03-08 00:50:09.379885 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:09.379896 | orchestrator |
2026-03-08 00:50:09.379907 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-08 00:50:09.379919 | orchestrator |
2026-03-08 00:50:09.379931 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-08 00:50:09.379950 | orchestrator | Sunday 08 March 2026 00:49:24 +0000 (0:00:51.401) 0:01:46.000 **********
2026-03-08 00:50:09.379961 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:09.379971 | orchestrator |
2026-03-08 00:50:09.379982 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-08 00:50:09.379994 | orchestrator | Sunday 08 March 2026 00:49:25 +0000 (0:00:00.827) 0:01:46.827 **********
2026-03-08 00:50:09.380005 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:50:09.380016 | orchestrator |
2026-03-08 00:50:09.380028 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-08 00:50:09.380038 | orchestrator | Sunday 08 March 2026 00:49:25 +0000 (0:00:00.253) 0:01:47.082 **********
2026-03-08 00:50:09.380050 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:09.380062 | orchestrator |
2026-03-08 00:50:09.380074 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-08 00:50:09.380085 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:02.141) 0:01:49.223 **********
2026-03-08 00:50:09.380096 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:09.380107 | orchestrator |
2026-03-08 00:50:09.380119 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-08 00:50:09.380130 | orchestrator |
2026-03-08 00:50:09.380142 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-08 00:50:09.380162 | orchestrator | Sunday 08 March 2026 00:49:44 +0000 (0:00:16.182) 0:02:05.406 **********
2026-03-08 00:50:09.380175 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:09.380186 | orchestrator |
2026-03-08 00:50:09.380197 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-08 00:50:09.380209 | orchestrator | Sunday 08 March 2026 00:49:44 +0000 (0:00:00.621) 0:02:06.028 **********
2026-03-08 00:50:09.380220 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:50:09.380231 | orchestrator |
2026-03-08 00:50:09.380242 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-08 00:50:09.380254 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.246) 0:02:06.274 **********
2026-03-08 00:50:09.380265 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:09.380277 | orchestrator |
2026-03-08 00:50:09.380289 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-08 00:50:09.380300 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:01.663) 0:02:07.938 **********
2026-03-08 00:50:09.380311 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:09.380323 | orchestrator |
2026-03-08 00:50:09.380335 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-08 00:50:09.380346 | orchestrator |
2026-03-08 00:50:09.380357 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-08 00:50:09.380368 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:16.423) 0:02:24.361 **********
2026-03-08 00:50:09.380379 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:50:09.380391 | orchestrator |
2026-03-08 00:50:09.380403 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-08 00:50:09.380415 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:00.585) 0:02:24.947 **********
2026-03-08 00:50:09.380427 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-08 00:50:09.380438 | orchestrator | enable_outward_rabbitmq_True
2026-03-08 00:50:09.380449 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-08 00:50:09.380461 | orchestrator | outward_rabbitmq_restart
2026-03-08 00:50:09.380472 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:09.380483 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:09.380495 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:09.380507 | orchestrator |
2026-03-08 00:50:09.380519 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-08 00:50:09.380530 | orchestrator | skipping: no hosts matched
2026-03-08 00:50:09.380542 | orchestrator |
2026-03-08 00:50:09.380552 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-08 00:50:09.380566 | orchestrator | skipping: no hosts matched
2026-03-08 00:50:09.380573 | orchestrator |
2026-03-08 00:50:09.380579 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-08 00:50:09.380586 | orchestrator | skipping: no hosts matched
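The three "Restart rabbitmq services" plays above apply the broker restart one node at a time: get container info, optionally enter maintenance mode, restart the container, then block on "Waiting for rabbitmq to start" before the next node begins. A minimal sketch of that rolling-restart pattern, assuming hypothetical `restart` and `is_healthy` callbacks standing in for the real Ansible tasks:

```python
import time

def rolling_restart(nodes, restart, is_healthy, timeout=300.0, poll=1.0):
    """Restart each node in turn, waiting until it reports healthy
    before moving on; raise TimeoutError if a node never recovers."""
    for node in nodes:
        restart(node)
        deadline = time.monotonic() + timeout
        while not is_healthy(node):
            if time.monotonic() > deadline:
                raise TimeoutError(f"{node} did not recover within {timeout}s")
            time.sleep(poll)

# Tiny self-contained demo with stub callbacks (illustrative only).
order = []

def restart(node):
    order.append(node)

def is_healthy(node):
    return True  # pretend the broker comes back immediately

rolling_restart(["testbed-node-0", "testbed-node-1", "testbed-node-2"],
                restart, is_healthy)
print(order)
```

The serial ordering is the point: at most one broker is down at a time, so the RabbitMQ cluster keeps quorum throughout the restart.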
2026-03-08 00:50:09.380593 | orchestrator |
2026-03-08 00:50:09.380600 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:50:09.380614 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-08 00:50:09.380621 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-08 00:50:09.380628 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:50:09.380635 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:50:09.380642 | orchestrator |
2026-03-08 00:50:09.380649 | orchestrator |
2026-03-08 00:50:09.380656 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:50:09.380663 | orchestrator | Sunday 08 March 2026 00:50:06 +0000 (0:00:02.749) 0:02:27.696 **********
2026-03-08 00:50:09.380669 | orchestrator | ===============================================================================
2026-03-08 00:50:09.380676 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.01s
2026-03-08 00:50:09.380683 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.74s
2026-03-08 00:50:09.380689 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.30s
2026-03-08 00:50:09.380723 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.94s
2026-03-08 00:50:09.380730 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.87s
2026-03-08 00:50:09.380737 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.75s
2026-03-08 00:50:09.380744 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.66s
2026-03-08 00:50:09.380750 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.59s
2026-03-08 00:50:09.380757 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.54s
2026-03-08 00:50:09.380763 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.47s
2026-03-08 00:50:09.380770 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 2.36s
2026-03-08 00:50:09.380777 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.35s
2026-03-08 00:50:09.380783 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.24s
2026-03-08 00:50:09.380793 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.18s
2026-03-08 00:50:09.380804 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.15s
2026-03-08 00:50:09.380823 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.09s
2026-03-08 00:50:09.380834 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.25s
2026-03-08 00:50:09.380845 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.23s
2026-03-08 00:50:09.380857 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.07s
2026-03-08 00:50:09.380869 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s
2026-03-08 00:50:09.380881 | orchestrator | 2026-03-08 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:12.419274 | orchestrator | 2026-03-08 00:50:12 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state STARTED
2026-03-08 00:50:12.420155 | orchestrator | 2026-03-08 00:50:12 | INFO  | Task
a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:50:12.420867 | orchestrator | 2026-03-08 00:50:12 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:50:12.420907 | orchestrator | 2026-03-08 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:04.282945 | orchestrator | 2026-03-08 00:51:04 | INFO  | Task bbedad74-0efc-4d01-b7ca-c92d21fb829e is in state SUCCESS
2026-03-08 00:51:04.284368 | orchestrator |
2026-03-08 00:51:04.284424 | orchestrator |
2026-03-08 00:51:04.284439 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:51:04.284451 | orchestrator |
2026-03-08 00:51:04.284463 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:51:04.284474 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:00.387) 0:00:00.387 **********
2026-03-08 00:51:04.284486 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:51:04.284498 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:51:04.284509 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:51:04.284520 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:51:04.284531 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:51:04.284542 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:51:04.284552 | orchestrator |
2026-03-08 00:51:04.284632 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:51:04.284645 | orchestrator | Sunday 08 March 2026 00:48:30 +0000 (0:00:00.891) 0:00:01.278 **********
2026-03-08 00:51:04.284656 | orchestrator | ok:
[testbed-node-3] => (item=enable_ovn_True) 2026-03-08 00:51:04.284667 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-08 00:51:04.284679 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-08 00:51:04.284689 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-08 00:51:04.284701 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-08 00:51:04.284712 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-08 00:51:04.284722 | orchestrator | 2026-03-08 00:51:04.284734 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-08 00:51:04.284745 | orchestrator | 2026-03-08 00:51:04.284756 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-08 00:51:04.284767 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:01.361) 0:00:02.639 ********** 2026-03-08 00:51:04.284779 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:51:04.284792 | orchestrator | 2026-03-08 00:51:04.284803 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-08 00:51:04.284814 | orchestrator | Sunday 08 March 2026 00:48:32 +0000 (0:00:01.178) 0:00:03.818 ********** 2026-03-08 00:51:04.284844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285308 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285436 | orchestrator | 2026-03-08 00:51:04.285448 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-08 00:51:04.285459 | orchestrator | Sunday 08 March 2026 00:48:34 +0000 (0:00:01.601) 0:00:05.419 ********** 2026-03-08 00:51:04.285470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285586 | orchestrator | 2026-03-08 00:51:04.285598 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-08 00:51:04.285609 | orchestrator | Sunday 08 March 2026 00:48:36 +0000 (0:00:02.045) 0:00:07.464 ********** 2026-03-08 00:51:04.285621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285705 | orchestrator | 2026-03-08 00:51:04.285716 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-08 00:51:04.285727 | orchestrator | Sunday 08 March 2026 00:48:37 +0000 (0:00:01.481) 0:00:08.946 ********** 2026-03-08 00:51:04.285743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285766 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285817 | orchestrator | 2026-03-08 00:51:04.285828 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 
2026-03-08 00:51:04.285839 | orchestrator | Sunday 08 March 2026 00:48:39 +0000 (0:00:01.863) 0:00:10.809 ********** 2026-03-08 00:51:04.285850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.285963 | orchestrator | 2026-03-08 00:51:04.285976 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-08 00:51:04.285989 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:02.176) 0:00:12.985 ********** 2026-03-08 00:51:04.286002 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:51:04.286070 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:51:04.286087 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.286101 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:51:04.286115 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.286127 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.286139 | orchestrator | 2026-03-08 00:51:04.286153 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-08 00:51:04.286166 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:03.021) 0:00:16.007 ********** 
2026-03-08 00:51:04.286178 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-08 00:51:04.286191 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-08 00:51:04.286203 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-08 00:51:04.286223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-08 00:51:04.286247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-08 00:51:04.286260 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:51:04.286270 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-08 00:51:04.286281 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:51:04.286292 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:51:04.286303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:51:04.286314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:51:04.286325 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:51:04.286337 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:51:04.286349 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:51:04.286360 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:51:04.286371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:51:04.286382 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:51:04.286393 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:51:04.286405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:51:04.286420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:51:04.286432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:51:04.286442 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:51:04.286453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:51:04.286464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:51:04.286474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:51:04.286485 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:51:04.286496 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:51:04.286507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': 
'60'}) 2026-03-08 00:51:04.286517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:51:04.286528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:51:04.286539 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:51:04.286550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:51:04.286581 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:51:04.286602 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:51:04.286633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:51:04.286651 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-08 00:51:04.286680 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-08 00:51:04.286699 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:51:04.286716 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-08 00:51:04.286733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-08 00:51:04.286760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-08 00:51:04.286779 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-08 
00:51:04.286799 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-08 00:51:04.286814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-08 00:51:04.286831 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-08 00:51:04.286848 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-08 00:51:04.286866 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-08 00:51:04.286883 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-08 00:51:04.286902 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-08 00:51:04.286920 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-08 00:51:04.286938 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-08 00:51:04.286955 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-08 00:51:04.286973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-08 00:51:04.286992 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-08 00:51:04.287012 | 
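The "Configure OVN in OVSDB" task above writes per-chassis `external_ids` into the local Open vSwitch database: each node gets its own `ovn-encap-ip`, all nodes share the Geneve encapsulation type and the three-member southbound `ovn-remote` list, and only the gateway chassis (testbed-node-0..2) receive `ovn-bridge-mappings` and `ovn-cms-options`. A minimal sketch of the resulting settings, with IPs and values taken from the log output; the helper name `ovn_external_ids` is hypothetical and the equivalent `ovs-vsctl` invocation shown in the comment is the generic manual form, not the role's actual implementation:

```python
def ovn_external_ids(encap_ip: str, is_gateway: bool) -> dict:
    """Build the external_ids mapping one chassis receives.

    Values mirror the task output above; this is an illustration of
    the end state, not the role's code.
    """
    ids = {
        "ovn-encap-ip": encap_ip,                       # per-node tunnel endpoint
        "ovn-encap-type": "geneve",
        # Southbound DB endpoints on the three control nodes:
        "ovn-remote": ",".join(f"tcp:192.168.16.{i}:6642" for i in (10, 11, 12)),
        "ovn-remote-probe-interval": "60000",           # ms
        "ovn-openflow-probe-interval": "60",            # s
        "ovn-monitor-all": "false",
    }
    if is_gateway:  # testbed-node-0..2 act as gateway chassis in this run
        ids["ovn-bridge-mappings"] = "physnet1:br-ex"
        ids["ovn-cms-options"] = "enable-chassis-as-gw,availability-zones=nova"
    return ids


# The generic manual equivalent per key would be, e.g.:
#   ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve
cmds = [
    f"ovs-vsctl set open_vswitch . external_ids:{key}={value}"
    for key, value in ovn_external_ids("192.168.16.10", True).items()
]
```

Note how the compute-only nodes (testbed-node-3..5) end up with the gateway keys in state `absent`, which matches the `ok:`/`changed:` split visible in the task output.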
orchestrator | 2026-03-08 00:51:04.287031 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:51:04.287059 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:20.568) 0:00:36.576 ********** 2026-03-08 00:51:04.287079 | orchestrator | 2026-03-08 00:51:04.287094 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:51:04.287105 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.062) 0:00:36.638 ********** 2026-03-08 00:51:04.287116 | orchestrator | 2026-03-08 00:51:04.287127 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:51:04.287137 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.061) 0:00:36.700 ********** 2026-03-08 00:51:04.287148 | orchestrator | 2026-03-08 00:51:04.287159 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:51:04.287170 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.059) 0:00:36.760 ********** 2026-03-08 00:51:04.287191 | orchestrator | 2026-03-08 00:51:04.287202 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:51:04.287212 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.060) 0:00:36.821 ********** 2026-03-08 00:51:04.287223 | orchestrator | 2026-03-08 00:51:04.287234 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:51:04.287245 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.108) 0:00:36.930 ********** 2026-03-08 00:51:04.287256 | orchestrator | 2026-03-08 00:51:04.287267 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-08 00:51:04.287278 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.061) 0:00:36.991 ********** 2026-03-08 00:51:04.287288 
| orchestrator | ok: [testbed-node-5] 2026-03-08 00:51:04.287300 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:51:04.287310 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.287321 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.287332 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:51:04.287343 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.287353 | orchestrator | 2026-03-08 00:51:04.287364 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-08 00:51:04.287375 | orchestrator | Sunday 08 March 2026 00:49:08 +0000 (0:00:02.220) 0:00:39.212 ********** 2026-03-08 00:51:04.287386 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.287397 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.287408 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:51:04.287419 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:51:04.287430 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.287440 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:51:04.287451 | orchestrator | 2026-03-08 00:51:04.287462 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-08 00:51:04.287474 | orchestrator | 2026-03-08 00:51:04.287485 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-08 00:51:04.287496 | orchestrator | Sunday 08 March 2026 00:49:42 +0000 (0:00:34.743) 0:01:13.956 ********** 2026-03-08 00:51:04.287507 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:51:04.287518 | orchestrator | 2026-03-08 00:51:04.287529 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-08 00:51:04.287540 | orchestrator | Sunday 08 March 2026 00:49:43 +0000 (0:00:00.777) 0:01:14.733 ********** 2026-03-08 00:51:04.287551 | orchestrator | 
included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:51:04.287633 | orchestrator | 2026-03-08 00:51:04.287656 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-08 00:51:04.287668 | orchestrator | Sunday 08 March 2026 00:49:44 +0000 (0:00:00.633) 0:01:15.366 ********** 2026-03-08 00:51:04.287678 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.287689 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.287700 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.287711 | orchestrator | 2026-03-08 00:51:04.287722 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-08 00:51:04.287733 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.971) 0:01:16.338 ********** 2026-03-08 00:51:04.287743 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.287754 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.287765 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.287775 | orchestrator | 2026-03-08 00:51:04.287787 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-08 00:51:04.287798 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.376) 0:01:16.715 ********** 2026-03-08 00:51:04.287808 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.287819 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.287830 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.287841 | orchestrator | 2026-03-08 00:51:04.287870 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-08 00:51:04.287881 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.309) 0:01:17.024 ********** 2026-03-08 00:51:04.287892 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.287903 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.287913 | 
orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.287924 | orchestrator | 2026-03-08 00:51:04.287935 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-08 00:51:04.287946 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:00.315) 0:01:17.340 ********** 2026-03-08 00:51:04.287957 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.287967 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.287978 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.287989 | orchestrator | 2026-03-08 00:51:04.287999 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-08 00:51:04.288010 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:00.560) 0:01:17.900 ********** 2026-03-08 00:51:04.288021 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288032 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288042 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288053 | orchestrator | 2026-03-08 00:51:04.288064 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-08 00:51:04.288075 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.299) 0:01:18.200 ********** 2026-03-08 00:51:04.288084 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288094 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288103 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288113 | orchestrator | 2026-03-08 00:51:04.288128 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-08 00:51:04.288138 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.306) 0:01:18.506 ********** 2026-03-08 00:51:04.288147 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288157 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288166 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:51:04.288176 | orchestrator | 2026-03-08 00:51:04.288185 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-08 00:51:04.288195 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.311) 0:01:18.818 ********** 2026-03-08 00:51:04.288204 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288214 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288223 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288232 | orchestrator | 2026-03-08 00:51:04.288242 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-08 00:51:04.288251 | orchestrator | Sunday 08 March 2026 00:49:48 +0000 (0:00:00.475) 0:01:19.293 ********** 2026-03-08 00:51:04.288261 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288270 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288280 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288289 | orchestrator | 2026-03-08 00:51:04.288299 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-08 00:51:04.288308 | orchestrator | Sunday 08 March 2026 00:49:48 +0000 (0:00:00.325) 0:01:19.619 ********** 2026-03-08 00:51:04.288318 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288327 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288337 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288346 | orchestrator | 2026-03-08 00:51:04.288356 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-08 00:51:04.288366 | orchestrator | Sunday 08 March 2026 00:49:48 +0000 (0:00:00.297) 0:01:19.916 ********** 2026-03-08 00:51:04.288375 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288385 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288394 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:51:04.288404 | orchestrator | 2026-03-08 00:51:04.288413 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-08 00:51:04.288430 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:00.300) 0:01:20.217 ********** 2026-03-08 00:51:04.288439 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288449 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288458 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288468 | orchestrator | 2026-03-08 00:51:04.288477 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-08 00:51:04.288487 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:00.494) 0:01:20.711 ********** 2026-03-08 00:51:04.288497 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288506 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288516 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288525 | orchestrator | 2026-03-08 00:51:04.288535 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-08 00:51:04.288544 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:00.326) 0:01:21.037 ********** 2026-03-08 00:51:04.288554 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288586 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288596 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288606 | orchestrator | 2026-03-08 00:51:04.288621 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-08 00:51:04.288631 | orchestrator | Sunday 08 March 2026 00:49:50 +0000 (0:00:00.388) 0:01:21.426 ********** 2026-03-08 00:51:04.288640 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288650 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288659 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:51:04.288669 | orchestrator | 2026-03-08 00:51:04.288678 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-08 00:51:04.288688 | orchestrator | Sunday 08 March 2026 00:49:50 +0000 (0:00:00.347) 0:01:21.774 ********** 2026-03-08 00:51:04.288697 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288707 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288717 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288726 | orchestrator | 2026-03-08 00:51:04.288736 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-08 00:51:04.288745 | orchestrator | Sunday 08 March 2026 00:49:51 +0000 (0:00:00.330) 0:01:22.104 ********** 2026-03-08 00:51:04.288755 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:51:04.288765 | orchestrator | 2026-03-08 00:51:04.288775 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-08 00:51:04.288784 | orchestrator | Sunday 08 March 2026 00:49:51 +0000 (0:00:00.845) 0:01:22.950 ********** 2026-03-08 00:51:04.288794 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.288803 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.288813 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.288822 | orchestrator | 2026-03-08 00:51:04.288832 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-08 00:51:04.288842 | orchestrator | Sunday 08 March 2026 00:49:52 +0000 (0:00:00.422) 0:01:23.372 ********** 2026-03-08 00:51:04.288851 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.288861 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.288870 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.288880 | orchestrator | 2026-03-08 00:51:04.288890 | 
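The `lookup_cluster.yml` / `bootstrap-initial.yml` sequence above first divides the DB hosts by whether an OVN NB/SB container volume already exists, then picks bootstrap arguments accordingly: here no volumes were found, so every skip-heavy "existing cluster" branch is skipped and the "new cluster" facts are set on all three nodes. A simplified sketch of that decision, under the assumption that volume presence is the deciding signal (the function name and the three-way split are an illustration, not the role's exact logic):

```python
def ovn_db_bootstrap_mode(hosts_with_volume: list, all_db_hosts: list) -> str:
    """Pick a bootstrap strategy from existing OVN DB volume availability.

    Simplified model of kolla-ansible's lookup_cluster division:
    - no host has state  -> bootstrap a brand-new raft cluster
    - every host has it  -> cluster already exists, normal deploy
    - a subset has it    -> join the fresh hosts as new members
    """
    if not hosts_with_volume:
        return "new-cluster"
    if set(hosts_with_volume) == set(all_db_hosts):
        return "existing-cluster"
    return "new-member"
```

In this run `ovn_db_bootstrap_mode([], [...])` applies, which is why the "new member" and leader/follower lookup tasks are all skipped and only the "(new cluster)" bootstrap-args tasks report `ok`.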
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-08 00:51:04.288899 | orchestrator | Sunday 08 March 2026 00:49:52 +0000 (0:00:00.539) 0:01:23.912 ********** 2026-03-08 00:51:04.288909 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288919 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288928 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.288938 | orchestrator | 2026-03-08 00:51:04.288948 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-08 00:51:04.288957 | orchestrator | Sunday 08 March 2026 00:49:53 +0000 (0:00:00.553) 0:01:24.465 ********** 2026-03-08 00:51:04.288974 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.288984 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.288994 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.289003 | orchestrator | 2026-03-08 00:51:04.289013 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-08 00:51:04.289023 | orchestrator | Sunday 08 March 2026 00:49:53 +0000 (0:00:00.330) 0:01:24.796 ********** 2026-03-08 00:51:04.289032 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.289048 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.289065 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.289095 | orchestrator | 2026-03-08 00:51:04.289113 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-08 00:51:04.289129 | orchestrator | Sunday 08 March 2026 00:49:54 +0000 (0:00:00.395) 0:01:25.191 ********** 2026-03-08 00:51:04.289146 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.289162 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.289178 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.289194 | orchestrator | 2026-03-08 
00:51:04.289212 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-08 00:51:04.289228 | orchestrator | Sunday 08 March 2026 00:49:54 +0000 (0:00:00.324) 0:01:25.516 ********** 2026-03-08 00:51:04.289246 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.289262 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.289272 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.289281 | orchestrator | 2026-03-08 00:51:04.289291 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-08 00:51:04.289301 | orchestrator | Sunday 08 March 2026 00:49:55 +0000 (0:00:00.586) 0:01:26.102 ********** 2026-03-08 00:51:04.289310 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.289320 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.289329 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.289339 | orchestrator | 2026-03-08 00:51:04.289382 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-08 00:51:04.289393 | orchestrator | Sunday 08 March 2026 00:49:55 +0000 (0:00:00.407) 0:01:26.509 ********** 2026-03-08 00:51:04.289404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289448 | orchestrator | 2026-03-08 00:51:04 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED 2026-03-08 00:51:04.289458 | orchestrator | 2026-03-08 00:51:04 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED 2026-03-08 00:51:04.289469 | orchestrator | 2026-03-08 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:51:04.289480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289556 | orchestrator | 2026-03-08 00:51:04.289595 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-08 00:51:04.289606 | orchestrator | Sunday 08 March 2026 00:49:56 +0000 (0:00:01.449) 0:01:27.958 ********** 2026-03-08 00:51:04.289616 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289724 | orchestrator | 2026-03-08 00:51:04.289734 | 
orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-08 00:51:04.289744 | orchestrator | Sunday 08 March 2026 00:50:00 +0000 (0:00:03.881) 0:01:31.840 ********** 2026-03-08 00:51:04.289754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289811 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.289866 | orchestrator | 2026-03-08 00:51:04.289876 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:51:04.289886 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:02.488) 0:01:34.329 ********** 2026-03-08 00:51:04.289896 | orchestrator | 2026-03-08 00:51:04.289906 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:51:04.289916 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:00.136) 0:01:34.465 ********** 2026-03-08 00:51:04.289925 | orchestrator | 2026-03-08 00:51:04.289935 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:51:04.289945 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:00.079) 0:01:34.545 ********** 2026-03-08 00:51:04.289954 | orchestrator | 2026-03-08 00:51:04.289964 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-08 00:51:04.289974 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:00.146) 0:01:34.692 ********** 2026-03-08 00:51:04.289983 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.289993 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.290003 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.290063 | orchestrator | 2026-03-08 00:51:04.290076 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-08 00:51:04.290092 | orchestrator | Sunday 08 March 2026 00:50:06 +0000 (0:00:02.982) 0:01:37.674 ********** 2026-03-08 00:51:04.290102 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.290111 | orchestrator | changed: [testbed-node-0] 2026-03-08 
00:51:04.290121 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.290130 | orchestrator | 2026-03-08 00:51:04.290140 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-08 00:51:04.290150 | orchestrator | Sunday 08 March 2026 00:50:14 +0000 (0:00:07.709) 0:01:45.384 ********** 2026-03-08 00:51:04.290159 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.290169 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.290178 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.290188 | orchestrator | 2026-03-08 00:51:04.290197 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-08 00:51:04.290207 | orchestrator | Sunday 08 March 2026 00:50:22 +0000 (0:00:08.296) 0:01:53.681 ********** 2026-03-08 00:51:04.290223 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.290232 | orchestrator | 2026-03-08 00:51:04.290242 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-08 00:51:04.290252 | orchestrator | Sunday 08 March 2026 00:50:22 +0000 (0:00:00.164) 0:01:53.845 ********** 2026-03-08 00:51:04.290261 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.290271 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.290280 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.290290 | orchestrator | 2026-03-08 00:51:04.290300 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-08 00:51:04.290309 | orchestrator | Sunday 08 March 2026 00:50:23 +0000 (0:00:00.954) 0:01:54.800 ********** 2026-03-08 00:51:04.290319 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.290329 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.290338 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.290348 | orchestrator | 2026-03-08 00:51:04.290357 | orchestrator | TASK [ovn-db : Get 
OVN_Southbound cluster leader] ****************************** 2026-03-08 00:51:04.290367 | orchestrator | Sunday 08 March 2026 00:50:24 +0000 (0:00:00.872) 0:01:55.673 ********** 2026-03-08 00:51:04.290377 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.290387 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.290396 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.290406 | orchestrator | 2026-03-08 00:51:04.290416 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-08 00:51:04.290425 | orchestrator | Sunday 08 March 2026 00:50:25 +0000 (0:00:00.887) 0:01:56.561 ********** 2026-03-08 00:51:04.290435 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.290444 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.290454 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.290463 | orchestrator | 2026-03-08 00:51:04.290473 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-08 00:51:04.290483 | orchestrator | Sunday 08 March 2026 00:50:26 +0000 (0:00:00.967) 0:01:57.529 ********** 2026-03-08 00:51:04.290492 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.290502 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.290511 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.290521 | orchestrator | 2026-03-08 00:51:04.290531 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-08 00:51:04.290540 | orchestrator | Sunday 08 March 2026 00:50:27 +0000 (0:00:01.099) 0:01:58.628 ********** 2026-03-08 00:51:04.290550 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.290579 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.290595 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.290609 | orchestrator | 2026-03-08 00:51:04.290624 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] 
************************************** 2026-03-08 00:51:04.290636 | orchestrator | Sunday 08 March 2026 00:50:28 +0000 (0:00:00.914) 0:01:59.542 ********** 2026-03-08 00:51:04.290646 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.290656 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.290671 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.290681 | orchestrator | 2026-03-08 00:51:04.290695 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-08 00:51:04.290705 | orchestrator | Sunday 08 March 2026 00:50:28 +0000 (0:00:00.329) 0:01:59.871 ********** 2026-03-08 00:51:04.290715 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290725 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290736 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290746 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 
'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290756 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290773 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290783 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290819 | orchestrator | 2026-03-08 00:51:04.290830 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-08 00:51:04.290844 | orchestrator | Sunday 08 March 2026 00:50:30 +0000 (0:00:01.605) 0:02:01.477 ********** 2026-03-08 00:51:04.290871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290887 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290978 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.290995 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291024 | orchestrator | 2026-03-08 00:51:04.291034 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-08 00:51:04.291044 | orchestrator | Sunday 08 March 2026 00:50:34 +0000 (0:00:04.252) 0:02:05.730 ********** 2026-03-08 00:51:04.291054 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291069 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 
00:51:04.291079 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291147 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:51:04.291164 | orchestrator | 2026-03-08 00:51:04.291174 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:51:04.291184 | orchestrator | Sunday 08 March 2026 00:50:37 +0000 (0:00:03.086) 0:02:08.816 ********** 2026-03-08 00:51:04.291198 | orchestrator | 2026-03-08 00:51:04.291220 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:51:04.291242 | orchestrator | Sunday 08 March 2026 00:50:37 +0000 (0:00:00.102) 0:02:08.919 ********** 2026-03-08 00:51:04.291258 | orchestrator | 2026-03-08 00:51:04.291273 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:51:04.291289 | orchestrator | Sunday 08 March 2026 00:50:37 +0000 
(0:00:00.065) 0:02:08.984 ********** 2026-03-08 00:51:04.291302 | orchestrator | 2026-03-08 00:51:04.291318 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-08 00:51:04.291333 | orchestrator | Sunday 08 March 2026 00:50:38 +0000 (0:00:00.072) 0:02:09.057 ********** 2026-03-08 00:51:04.291350 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.291366 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.291382 | orchestrator | 2026-03-08 00:51:04.291398 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-08 00:51:04.291418 | orchestrator | Sunday 08 March 2026 00:50:44 +0000 (0:00:06.258) 0:02:15.316 ********** 2026-03-08 00:51:04.291428 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.291438 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.291448 | orchestrator | 2026-03-08 00:51:04.291457 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-08 00:51:04.291467 | orchestrator | Sunday 08 March 2026 00:50:50 +0000 (0:00:06.433) 0:02:21.750 ********** 2026-03-08 00:51:04.291476 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:51:04.291486 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:51:04.291495 | orchestrator | 2026-03-08 00:51:04.291505 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-08 00:51:04.291515 | orchestrator | Sunday 08 March 2026 00:50:57 +0000 (0:00:07.046) 0:02:28.796 ********** 2026-03-08 00:51:04.291524 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:51:04.291534 | orchestrator | 2026-03-08 00:51:04.291544 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-08 00:51:04.291553 | orchestrator | Sunday 08 March 2026 00:50:57 +0000 (0:00:00.143) 0:02:28.939 ********** 2026-03-08 00:51:04.291601 | orchestrator 
| ok: [testbed-node-0] 2026-03-08 00:51:04.291612 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.291621 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.291631 | orchestrator | 2026-03-08 00:51:04.291641 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-08 00:51:04.291650 | orchestrator | Sunday 08 March 2026 00:50:58 +0000 (0:00:00.861) 0:02:29.801 ********** 2026-03-08 00:51:04.291660 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.291669 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.291679 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.291688 | orchestrator | 2026-03-08 00:51:04.291698 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-08 00:51:04.291707 | orchestrator | Sunday 08 March 2026 00:50:59 +0000 (0:00:00.698) 0:02:30.499 ********** 2026-03-08 00:51:04.291717 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.291726 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.291736 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.291746 | orchestrator | 2026-03-08 00:51:04.291755 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-08 00:51:04.291765 | orchestrator | Sunday 08 March 2026 00:51:00 +0000 (0:00:00.822) 0:02:31.321 ********** 2026-03-08 00:51:04.291774 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:51:04.291793 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:51:04.291803 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:51:04.291813 | orchestrator | 2026-03-08 00:51:04.291823 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-08 00:51:04.291832 | orchestrator | Sunday 08 March 2026 00:51:00 +0000 (0:00:00.654) 0:02:31.976 ********** 2026-03-08 00:51:04.291842 | orchestrator | ok: [testbed-node-0] 2026-03-08 
00:51:04.291851 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.291861 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.291870 | orchestrator | 2026-03-08 00:51:04.291880 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-08 00:51:04.291890 | orchestrator | Sunday 08 March 2026 00:51:01 +0000 (0:00:00.933) 0:02:32.910 ********** 2026-03-08 00:51:04.291899 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:51:04.291909 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:51:04.291918 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:51:04.291928 | orchestrator | 2026-03-08 00:51:04.291937 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:51:04.291955 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-08 00:51:04.291966 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-08 00:51:04.291976 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-08 00:51:04.291986 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:51:04.291996 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:51:04.292009 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:51:04.292026 | orchestrator | 2026-03-08 00:51:04.292042 | orchestrator | 2026-03-08 00:51:04.292065 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:51:04.292085 | orchestrator | Sunday 08 March 2026 00:51:02 +0000 (0:00:00.951) 0:02:33.861 ********** 2026-03-08 00:51:04.292100 | orchestrator | 
===============================================================================
2026-03-08 00:51:04.292116 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.74s
2026-03-08 00:51:04.292132 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.57s
2026-03-08 00:51:04.292146 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.34s
2026-03-08 00:51:04.292162 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.14s
2026-03-08 00:51:04.292177 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.24s
2026-03-08 00:51:04.292194 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.25s
2026-03-08 00:51:04.292210 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s
2026-03-08 00:51:04.292226 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.09s
2026-03-08 00:51:04.292251 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.02s
2026-03-08 00:51:04.292267 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.49s
2026-03-08 00:51:04.292284 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.23s
2026-03-08 00:51:04.292299 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.18s
2026-03-08 00:51:04.292316 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.05s
2026-03-08 00:51:04.292327 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.86s
2026-03-08 00:51:04.292346 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s
2026-03-08 00:51:04.292355 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.60s
2026-03-08 00:51:04.292365 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.48s
2026-03-08 00:51:04.292374 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2026-03-08 00:51:04.292384 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.36s
2026-03-08 00:51:04.292394 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.18s
2026-03-08 00:51:07.331144 | orchestrator | 2026-03-08 00:51:07 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:51:07.333437 | orchestrator | 2026-03-08 00:51:07 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state STARTED
2026-03-08 00:51:07.333518 | orchestrator | 2026-03-08 00:51:07 | INFO  | Wait 1 second(s) until the next check
[... the same three STARTED/wait polling lines repeat roughly every 3 seconds from 00:51:10 through 00:53:51 ...]
2026-03-08 00:53:54.782111 | orchestrator | 2026-03-08 00:53:54 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:53:54.793094 | orchestrator | 2026-03-08 00:53:54 | INFO  | Task 7dad01f2-0f24-4417-b993-e5db968b081d is in state SUCCESS
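The repeated STARTED/SUCCESS messages above come from a simple fixed-interval polling loop. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable for the task-state lookup (the real OSISM client call is not shown in this log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll until every task leaves the STARTED state, mirroring the log output.

    get_state is a hypothetical callable mapping a task ID to a state string
    ("STARTED", "SUCCESS", ...); it stands in for whatever task-state lookup
    the deployment tooling actually uses.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {pending}")
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                # Task finished; stop polling it.
                pending.remove(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

In the log above, two task IDs are polled together and the loop ends as soon as the last one reports SUCCESS.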
2026-03-08 00:53:54.795062 | orchestrator | 
2026-03-08 00:53:54.795122 | orchestrator | 
2026-03-08 00:53:54.795131 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:53:54.795139 | orchestrator | 
2026-03-08 00:53:54.795177 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:53:54.795205 | orchestrator | Sunday 08 March 2026 00:47:14 +0000 (0:00:00.298) 0:00:00.298 **********
2026-03-08 00:53:54.795212 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.795220 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.795226 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.795232 | orchestrator | 
2026-03-08 00:53:54.795239 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:53:54.795246 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.351) 0:00:00.650 **********
2026-03-08 00:53:54.795255 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-08 00:53:54.795262 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-08 00:53:54.795267 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-08 00:53:54.795273 | orchestrator | 
2026-03-08 00:53:54.795280 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-08 00:53:54.795286 | orchestrator | 
2026-03-08 00:53:54.795306 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-08 00:53:54.795313 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.465) 0:00:01.115 **********
2026-03-08 00:53:54.795320 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.795326 | orchestrator | 
2026-03-08 00:53:54.795332 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-08 00:53:54.795338 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:00.622) 0:00:01.737 **********
2026-03-08 00:53:54.795457 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.795464 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.795467 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.795471 | orchestrator | 
2026-03-08 00:53:54.795475 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-08 00:53:54.795479 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:00.611) 0:00:02.348 **********
2026-03-08 00:53:54.795483 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.795487 | orchestrator | 
2026-03-08 00:53:54.795658 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-08 00:53:54.795663 | orchestrator | Sunday 08 March 2026 00:47:17 +0000 (0:00:00.685) 0:00:03.034 **********
2026-03-08 00:53:54.795667 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.795671 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.795675 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.795678 | orchestrator | 
2026-03-08 00:53:54.795682 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-08 00:53:54.795686 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:00.698) 0:00:03.733 **********
2026-03-08 00:53:54.795690 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:54.795694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:54.795698 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:54.795702 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:54.795706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:54.795709 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:54.795713 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-08 00:53:54.795717 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-08 00:53:54.795721 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-08 00:53:54.795725 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-08 00:53:54.795736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-08 00:53:54.795740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-08 00:53:54.795743 | orchestrator | 
2026-03-08 00:53:54.795747 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-08 00:53:54.795751 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:02.268) 0:00:06.001 **********
2026-03-08 00:53:54.795755 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-08 00:53:54.795759 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-08 00:53:54.795762 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-08 00:53:54.795766 | orchestrator | 
2026-03-08 00:53:54.795770 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-08 00:53:54.795774 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.921) 0:00:06.922 **********
2026-03-08 00:53:54.795777 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-08 00:53:54.795781 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-08 00:53:54.795785 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-08 00:53:54.795789 | orchestrator | 
2026-03-08 00:53:54.795792 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-08 00:53:54.795796 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:01.337) 0:00:08.260 **********
2026-03-08 00:53:54.795800 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs) 
2026-03-08 00:53:54.795804 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.795817 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs) 
2026-03-08 00:53:54.795821 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.795825 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs) 
2026-03-08 00:53:54.795828 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.795832 | orchestrator | 
2026-03-08 00:53:54.795836 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-08 00:53:54.795840 | orchestrator | Sunday 08 March 2026 00:47:23 +0000 (0:00:01.016) 0:00:09.277 **********
2026-03-08 00:53:54.795851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.795859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.795863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.795872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.795876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.795880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.795890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.795897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.795901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.795905 | orchestrator | 
2026-03-08 00:53:54.795909 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-08 00:53:54.795913 | orchestrator | Sunday 08 March 2026 00:47:26 +0000 (0:00:02.331) 0:00:11.608 **********
2026-03-08 00:53:54.795920 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.795924 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.795927 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.795966 | orchestrator | 
2026-03-08 00:53:54.795971 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-08 00:53:54.795974 | orchestrator | Sunday 08 March 2026 00:47:27 +0000 (0:00:01.189) 0:00:12.798 **********
2026-03-08 00:53:54.795978 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-08 00:53:54.795982 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-08 00:53:54.795986 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-08 00:53:54.795990 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-08 00:53:54.795993 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-08 00:53:54.796017 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-08 00:53:54.796021 | orchestrator | 
2026-03-08 00:53:54.796025 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-08 00:53:54.796029 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:02.525) 0:00:15.323 **********
2026-03-08 00:53:54.796033 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.796036 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.796040 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.796044 | orchestrator | 
2026-03-08 00:53:54.796048 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-08 00:53:54.796051 | orchestrator | Sunday 08 March 2026 00:47:31 +0000 (0:00:01.915) 0:00:17.238 **********
2026-03-08 00:53:54.796055 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.796059 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.796063 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.796066 | orchestrator | 
2026-03-08 00:53:54.796070 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-08 00:53:54.796074 | orchestrator | Sunday 08 March 2026 00:47:34 +0000 (0:00:02.426) 0:00:19.665 **********
2026-03-08 00:53:54.796078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.796087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.796137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.796164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:54.796819 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.796840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.796860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.796865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.796886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:54.796926 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.796931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.796943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.796948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.796952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:54.796956 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.796959 | orchestrator | 2026-03-08 00:53:54.796964 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-03-08 00:53:54.796968 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:01.062) 0:00:20.728 ********** 2026-03-08 00:53:54.796972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.796987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.796991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.797009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6', 
'__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:54.797013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.797032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6', 
'__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:54.797196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.797205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6', 
'__omit_place_holder__22d8ccb99dff1e3ae35facdbcc496df3883287c6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:54.797209 | orchestrator | 2026-03-08 00:53:54.797213 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-08 00:53:54.797217 | orchestrator | Sunday 08 March 2026 00:47:39 +0000 (0:00:04.200) 0:00:24.928 ********** 2026-03-08 00:53:54.797221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797259 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.797263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:54.797268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:54.797272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:54.797275 | orchestrator | 2026-03-08 00:53:54.797279 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-08 00:53:54.797288 | orchestrator | Sunday 08 March 2026 00:47:42 +0000 (0:00:03.239) 0:00:28.168 ********** 2026-03-08 00:53:54.797292 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-08 00:53:54.797305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-08 00:53:54.797309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-08 00:53:54.797313 | orchestrator | 2026-03-08 00:53:54.797317 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-08 00:53:54.797320 | orchestrator | Sunday 08 March 2026 00:47:44 +0000 (0:00:01.875) 0:00:30.044 ********** 2026-03-08 00:53:54.797324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-08 00:53:54.797328 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-08 00:53:54.797332 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-08 00:53:54.797336 | orchestrator | 2026-03-08 00:53:54.797339 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-08 00:53:54.797343 | orchestrator | Sunday 08 March 2026 00:47:48 +0000 
(0:00:03.676) 0:00:33.720 ********** 2026-03-08 00:53:54.797350 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.797354 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.797358 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.797362 | orchestrator | 2026-03-08 00:53:54.797366 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-08 00:53:54.797369 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:00.775) 0:00:34.496 ********** 2026-03-08 00:53:54.797373 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-08 00:53:54.797378 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-08 00:53:54.797382 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-08 00:53:54.797385 | orchestrator | 2026-03-08 00:53:54.797389 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-08 00:53:54.797393 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:03.706) 0:00:38.203 ********** 2026-03-08 00:53:54.797397 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-08 00:53:54.797401 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-08 00:53:54.797404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-08 00:53:54.797408 | orchestrator | 2026-03-08 00:53:54.797412 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-08 00:53:54.797416 | orchestrator | Sunday 08 March 2026 
00:47:55 +0000 (0:00:02.883) 0:00:41.086 **********
2026-03-08 00:53:54.797420 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-08 00:53:54.797424 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-08 00:53:54.797427 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-08 00:53:54.797431 | orchestrator |
2026-03-08 00:53:54.797435 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-08 00:53:54.797439 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:02.720) 0:00:43.806 **********
2026-03-08 00:53:54.797442 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-08 00:53:54.797446 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-08 00:53:54.797450 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-08 00:53:54.797456 | orchestrator |
2026-03-08 00:53:54.797460 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-08 00:53:54.797464 | orchestrator | Sunday 08 March 2026 00:48:00 +0000 (0:00:02.053) 0:00:45.860 **********
2026-03-08 00:53:54.797468 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.797471 | orchestrator |
2026-03-08 00:53:54.797475 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-08 00:53:54.797479 | orchestrator | Sunday 08 March 2026 00:48:01 +0000 (0:00:01.212) 0:00:47.073 **********
2026-03-08 00:53:54.797483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797617 | orchestrator |
2026-03-08 00:53:54.797621 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-08 00:53:54.797625 | orchestrator | Sunday 08 March 2026 00:48:05 +0000 (0:00:04.273) 0:00:51.346 **********
2026-03-08 00:53:54.797631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797646 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.797650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797673 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.797679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797693 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.797911 | orchestrator |
2026-03-08 00:53:54.797922 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-08 00:53:54.797926 | orchestrator | Sunday 08 March 2026 00:48:07 +0000 (0:00:01.377) 0:00:52.724 **********
2026-03-08 00:53:54.797930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797955 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.797959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.797970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797982 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.797986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.797990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.797994 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.797999 | orchestrator |
2026-03-08 00:53:54.798003 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-08 00:53:54.798007 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:01.059) 0:00:53.784 **********
2026-03-08 00:53:54.798065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798085 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.798089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798101 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.798187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798210 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.798214 | orchestrator |
2026-03-08 00:53:54.798218 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-08 00:53:54.798222 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:01.626) 0:00:55.410 **********
2026-03-08 00:53:54.798226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798238 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.798242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798271 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.798275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798287 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.798291 | orchestrator |
2026-03-08 00:53:54.798295 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-08 00:53:54.798298 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:01.622) 0:00:57.033 **********
2026-03-08 00:53:54.798302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798331 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.798335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798522 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.798526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798560 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.798564 | orchestrator |
2026-03-08 00:53:54.798568 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-08 00:53:54.798572 | orchestrator | Sunday 08 March 2026 00:48:14 +0000 (0:00:02.752) 0:00:59.785 **********
2026-03-08 00:53:54.798579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:54.798583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:54.798587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:54.798591 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.798595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.798599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.798615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.798638 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.798645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.798649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.798653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.798657 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.798661 | orchestrator | 2026-03-08 00:53:54.798665 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-08 00:53:54.798669 | orchestrator | Sunday 08 March 2026 00:48:17 +0000 (0:00:02.933) 0:01:02.719 ********** 2026-03-08 00:53:54.798673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.798677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.798694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.798699 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.798705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.798709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.798713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.798717 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.798721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.798725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.798732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.798736 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.798900 | orchestrator | 2026-03-08 00:53:54.799178 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-08 00:53:54.799247 | orchestrator | Sunday 08 March 2026 00:48:18 +0000 (0:00:01.299) 0:01:04.018 
********** 2026-03-08 00:53:54.799256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.799264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.799268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.799302 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 00:53:54.799308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.799312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.799323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.799327 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 00:53:54.799358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:54.799364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:54.799369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:54.799406 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:53:54.799522 | orchestrator | 2026-03-08 00:53:54.799530 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-08 00:53:54.799534 | orchestrator | Sunday 08 March 2026 00:48:19 +0000 (0:00:00.915) 0:01:04.934 ********** 2026-03-08 00:53:54.799538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-08 00:53:54.799542 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-08 00:53:54.799545 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-08 00:53:54.799549 | orchestrator | 2026-03-08 00:53:54.799553 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-08 00:53:54.799557 | orchestrator | Sunday 08 March 2026 00:48:21 +0000 (0:00:01.995) 0:01:06.930 ********** 2026-03-08 00:53:54.799561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-08 00:53:54.799564 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-08 00:53:54.799568 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-08 00:53:54.799578 | orchestrator | 2026-03-08 00:53:54.799581 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-08 00:53:54.799585 | orchestrator | Sunday 08 March 2026 00:48:23 +0000 (0:00:01.593) 0:01:08.524 ********** 2026-03-08 00:53:54.799589 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 00:53:54.799593 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 00:53:54.799597 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 00:53:54.799600 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 00:53:54.799604 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.799608 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 00:53:54.799612 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.799615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 00:53:54.799619 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.799623 | orchestrator | 2026-03-08 00:53:54.799627 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-08 00:53:54.799630 | orchestrator | Sunday 08 March 2026 00:48:24 +0000 (0:00:01.034) 0:01:09.558 ********** 2026-03-08 00:53:54.799680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.799688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.799695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:54.799699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.799708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.799712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:54.799716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:54.799763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:54.799772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:54.799776 | orchestrator | 2026-03-08 00:53:54.799780 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-08 00:53:54.799784 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:03.020) 0:01:12.578 ********** 2026-03-08 00:53:54.799788 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.799792 | orchestrator | 2026-03-08 00:53:54.799796 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-08 00:53:54.799800 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:00.610) 0:01:13.188 ********** 2026-03-08 00:53:54.799805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-08 00:53:54.799814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.799819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.799823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.799894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-08 00:53:54.799906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.799910 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-08 00:53:54.799958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.799964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.799968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.799999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 
00:53:54.800012 | orchestrator | 2026-03-08 00:53:54.800016 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-08 00:53:54.800020 | orchestrator | Sunday 08 March 2026 00:48:32 +0000 (0:00:04.740) 0:01:17.929 ********** 2026-03-08 00:53:54.800042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-08 00:53:54.800047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.800051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800059 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.800309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-03-08 00:53:54.800322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.800332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800340 | orchestrator | skipping: [testbed-node-1] 
2026-03-08 00:53:54.800344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-08 00:53:54.800348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.800423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.800441 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.800445 | orchestrator | 2026-03-08 00:53:54.800449 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-08 00:53:54.800453 | orchestrator | Sunday 08 March 2026 00:48:34 +0000 (0:00:01.515) 0:01:19.445 ********** 2026-03-08 00:53:54.800457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:54.800461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:54.800466 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.800469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:54.800473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:54.800477 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.800481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:54.800485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:54.800489 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.800492 | orchestrator | 2026-03-08 00:53:54.800496 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-08 00:53:54.800500 | orchestrator | Sunday 08 March 2026 00:48:35 +0000 (0:00:01.415) 0:01:20.860 ********** 2026-03-08 00:53:54.800503 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.800507 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.800511 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.800515 | orchestrator | 2026-03-08 00:53:54.800518 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-08 00:53:54.800522 | orchestrator | Sunday 08 March 2026 00:48:37 +0000 (0:00:01.987) 0:01:22.848 ********** 2026-03-08 00:53:54.800526 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.800530 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.800533 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.800537 | orchestrator | 2026-03-08 00:53:54.800541 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-08 00:53:54.800545 | orchestrator | Sunday 08 March 2026 00:48:39 +0000 (0:00:02.304) 
0:01:25.152 ********** 2026-03-08 00:53:54.800548 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.800552 | orchestrator | 2026-03-08 00:53:54.800556 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-08 00:53:54.800559 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:01.428) 0:01:26.581 ********** 2026-03-08 00:53:54.801066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.801093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.801180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.801203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801211 | orchestrator | 2026-03-08 00:53:54.801215 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-08 00:53:54.801219 | orchestrator | Sunday 08 March 2026 00:48:45 +0000 (0:00:04.402) 0:01:30.984 ********** 2026-03-08 00:53:54.801223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.801227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801263 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.801269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.801273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801281 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.801285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.801294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.801320 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.801324 | orchestrator | 2026-03-08 00:53:54.801328 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-08 00:53:54.801334 | orchestrator | Sunday 08 March 2026 00:48:47 +0000 (0:00:02.122) 0:01:33.107 ********** 2026-03-08 00:53:54.801338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-08 00:53:54.801342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-08 00:53:54.801346 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.801350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}})  2026-03-08 00:53:54.801354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-08 00:53:54.801358 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.801362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-08 00:53:54.801366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-08 00:53:54.801370 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.801373 | orchestrator | 2026-03-08 00:53:54.801377 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-08 00:53:54.801381 | orchestrator | Sunday 08 March 2026 00:48:49 +0000 (0:00:01.487) 0:01:34.594 ********** 2026-03-08 00:53:54.801385 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.801388 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.801392 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.801396 | orchestrator | 2026-03-08 00:53:54.801400 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-08 00:53:54.801406 | orchestrator | Sunday 08 March 2026 00:48:50 +0000 (0:00:01.541) 0:01:36.135 ********** 2026-03-08 00:53:54.801410 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.801413 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.801417 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.801421 | orchestrator | 
2026-03-08 00:53:54.801425 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-08 00:53:54.801428 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:02.123) 0:01:38.259 **********
2026-03-08 00:53:54.801432 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801436 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801439 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801443 | orchestrator |
2026-03-08 00:53:54.801447 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-08 00:53:54.801451 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:00.332) 0:01:38.592 **********
2026-03-08 00:53:54.801454 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.801458 | orchestrator |
2026-03-08 00:53:54.801462 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-08 00:53:54.801466 | orchestrator | Sunday 08 March 2026 00:48:54 +0000 (0:00:00.934) 0:01:39.526 **********
2026-03-08 00:53:54.801473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:54.801479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:54.801484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:54.801488 | orchestrator |
2026-03-08 00:53:54.801492 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-08 00:53:54.801498 | orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:03.324) 0:01:42.851 **********
2026-03-08 00:53:54.801502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:54.801506 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:54.801514 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:54.801524 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801528 | orchestrator |
2026-03-08 00:53:54.801532 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-08 00:53:54.801536 | orchestrator | Sunday 08 March 2026 00:48:59 +0000 (0:00:02.299) 0:01:45.151 **********
2026-03-08 00:53:54.801543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:54.801548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:54.801553 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:54.801563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:54.801567 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:54.801575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:54.801579 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801583 | orchestrator |
2026-03-08 00:53:54.801586 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-08 00:53:54.801590 | orchestrator | Sunday 08 March 2026 00:49:02 +0000 (0:00:02.796) 0:01:47.948 **********
2026-03-08 00:53:54.801594 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801598 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801601 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801605 | orchestrator |
2026-03-08 00:53:54.801609 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-08 00:53:54.801613 | orchestrator | Sunday 08 March 2026 00:49:03 +0000 (0:00:00.733) 0:01:48.681 **********
2026-03-08 00:53:54.801616 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801620 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801624 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801628 | orchestrator |
2026-03-08 00:53:54.801631 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-08 00:53:54.801653 | orchestrator | Sunday 08 March 2026 00:49:04 +0000 (0:00:01.210) 0:01:49.891 **********
2026-03-08 00:53:54.801658 | orchestrator | included: cinder for testbed-node-0, testbed-node-2, testbed-node-1
2026-03-08 00:53:54.801662 | orchestrator |
2026-03-08 00:53:54.801666 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-08 00:53:54.801670 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.813) 0:01:50.704 **********
2026-03-08 00:53:54.801675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.801685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.801742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.801775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801796 | orchestrator |
2026-03-08 00:53:54.801800 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-08 00:53:54.801804 | orchestrator | Sunday 08 March 2026 00:49:11 +0000 (0:00:05.691) 0:01:56.396 **********
2026-03-08 00:53:54.801809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.801813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.801832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801839 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801858 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.801871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.801889 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801894 | orchestrator |
2026-03-08 00:53:54.801898 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-08 00:53:54.801903 | orchestrator | Sunday 08 March 2026 00:49:12 +0000 (0:00:01.454) 0:01:57.851 **********
2026-03-08 00:53:54.801907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-08 00:53:54.801911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-08 00:53:54.801916 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.801920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-08 00:53:54.801925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-08 00:53:54.801929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-08 00:53:54.801934 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.801939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-08 00:53:54.801943 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.801947 | orchestrator |
2026-03-08 00:53:54.801952 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-08 00:53:54.801956 | orchestrator | Sunday 08 March 2026 00:49:14 +0000 (0:00:01.767) 0:01:59.618 **********
2026-03-08 00:53:54.801963 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.801968 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.801972 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.801981 | orchestrator |
2026-03-08 00:53:54.801986 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-08 00:53:54.801990 | orchestrator | Sunday 08 March 2026 00:49:16 +0000 (0:00:01.956) 0:02:01.574 **********
2026-03-08 00:53:54.801995 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.801999 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.802004 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.802008 | orchestrator |
2026-03-08 00:53:54.802069 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-08 00:53:54.802075 | orchestrator | Sunday 08 March 2026 00:49:18 +0000 (0:00:02.206) 0:02:03.781 **********
2026-03-08 00:53:54.802079 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.802097 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.802102 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.802105 | orchestrator |
2026-03-08 00:53:54.802109 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-08 00:53:54.802113 | orchestrator | Sunday 08 March 2026 00:49:18 +0000 (0:00:00.562) 0:02:04.343 **********
2026-03-08 00:53:54.802117 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.802121 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.802124 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.802128 | orchestrator |
2026-03-08 00:53:54.802132 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-08 00:53:54.802136 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:00.390) 0:02:04.734 **********
2026-03-08 00:53:54.802151 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.802155 | orchestrator |
2026-03-08 00:53:54.802161 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-08 00:53:54.802165 | orchestrator | Sunday 08 March 2026 00:49:20 +0000 (0:00:01.003) 0:02:05.738 **********
2026-03-08 00:53:54.802170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 00:53:54.802175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 00:53:54.802179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.802186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns',
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 00:53:54.802238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 00:53:54.802283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:54.802290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:54.802321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802374 | orchestrator | 2026-03-08 00:53:54.802380 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-08 00:53:54.802386 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:05.845) 0:02:11.583 ********** 2026-03-08 00:53:54.802393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 00:53:54.802402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:54.802408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-08 00:53:54.802429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 00:53:54.802439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:54.802445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802470 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.802477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802515 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.802525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 00:53:54.802534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:54.802541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.802573 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.802579 | orchestrator | 2026-03-08 00:53:54.802585 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-08 00:53:54.802591 | orchestrator | Sunday 08 March 2026 00:49:27 +0000 (0:00:01.264) 0:02:12.847 ********** 2026-03-08 00:53:54.802600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:54.802606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:54.802612 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 00:53:54.802620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:54.802625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:54.802635 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.802643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:54.802653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:54.802659 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.802665 | orchestrator | 2026-03-08 00:53:54.802670 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-08 00:53:54.802676 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:01.282) 0:02:14.129 ********** 2026-03-08 00:53:54.802682 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.802688 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.802693 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.802698 | orchestrator | 2026-03-08 00:53:54.802704 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-08 00:53:54.802710 | orchestrator | Sunday 08 March 2026 00:49:30 +0000 (0:00:01.463) 0:02:15.592 ********** 2026-03-08 00:53:54.802716 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.802722 | orchestrator | 
changed: [testbed-node-1] 2026-03-08 00:53:54.802728 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.802734 | orchestrator | 2026-03-08 00:53:54.802740 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-08 00:53:54.802746 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:01.932) 0:02:17.525 ********** 2026-03-08 00:53:54.802752 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.802758 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.802764 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.802769 | orchestrator | 2026-03-08 00:53:54.802775 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-08 00:53:54.802781 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:00.558) 0:02:18.083 ********** 2026-03-08 00:53:54.802787 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.802793 | orchestrator | 2026-03-08 00:53:54.802799 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-08 00:53:54.802804 | orchestrator | Sunday 08 March 2026 00:49:33 +0000 (0:00:00.820) 0:02:18.904 ********** 2026-03-08 00:53:54.802819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 00:53:54.802839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.802848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 00:53:54.802899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 00:53:54.802913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.802926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.802933 | orchestrator | 2026-03-08 00:53:54.802937 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-08 00:53:54.802941 | orchestrator | Sunday 08 March 2026 00:49:37 +0000 (0:00:04.334) 0:02:23.239 ********** 2026-03-08 00:53:54.802945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 00:53:54.802954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.802963 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.802968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 00:53:54.802975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.802981 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.802988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 00:53:54.802995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.803001 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803005 | orchestrator | 2026-03-08 00:53:54.803009 | orchestrator | TASK 
[haproxy-config : Configuring firewall for glance] ************************ 2026-03-08 00:53:54.803013 | orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:03.392) 0:02:26.631 ********** 2026-03-08 00:53:54.803017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:54.803023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:54.803027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:54.803031 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.803036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:54.803040 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:54.803047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:54.803051 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803055 | orchestrator | 2026-03-08 00:53:54.803059 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-08 00:53:54.803066 | orchestrator | Sunday 08 March 2026 00:49:44 +0000 (0:00:03.265) 
0:02:29.896 ********** 2026-03-08 00:53:54.803070 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.803074 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.803077 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.803081 | orchestrator | 2026-03-08 00:53:54.803085 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-08 00:53:54.803088 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:01.333) 0:02:31.230 ********** 2026-03-08 00:53:54.803092 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.803096 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.803100 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.803104 | orchestrator | 2026-03-08 00:53:54.803110 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-08 00:53:54.803114 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:02.003) 0:02:33.233 ********** 2026-03-08 00:53:54.803117 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.803121 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803125 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803128 | orchestrator | 2026-03-08 00:53:54.803132 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-08 00:53:54.803136 | orchestrator | Sunday 08 March 2026 00:49:48 +0000 (0:00:00.552) 0:02:33.786 ********** 2026-03-08 00:53:54.803154 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.803159 | orchestrator | 2026-03-08 00:53:54.803163 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-08 00:53:54.803166 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:00.858) 0:02:34.644 ********** 2026-03-08 00:53:54.803173 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 00:53:54.803177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 00:53:54.803182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 00:53:54.803185 | orchestrator | 2026-03-08 00:53:54.803189 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-08 00:53:54.803196 | orchestrator | Sunday 08 March 2026 00:49:52 +0000 (0:00:03.295) 0:02:37.939 ********** 2026-03-08 00:53:54.803200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 00:53:54.803207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 00:53:54.803211 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.803214 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803220 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 00:53:54.803224 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803228 | orchestrator | 2026-03-08 00:53:54.803232 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-08 00:53:54.803235 | orchestrator | Sunday 08 March 2026 00:49:53 +0000 (0:00:00.667) 0:02:38.606 ********** 2026-03-08 00:53:54.803239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:54.803244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:54.803248 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.803252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:54.803256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:54.803260 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:54.803267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:54.803274 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803278 | orchestrator | 2026-03-08 00:53:54.803282 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-08 00:53:54.803286 | orchestrator | Sunday 08 March 2026 00:49:53 +0000 (0:00:00.652) 0:02:39.259 ********** 2026-03-08 00:53:54.803290 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.803294 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.803297 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.803301 | orchestrator | 2026-03-08 00:53:54.803305 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-08 00:53:54.803309 | orchestrator | Sunday 08 March 2026 00:49:55 +0000 (0:00:01.429) 0:02:40.688 ********** 2026-03-08 00:53:54.803312 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.803316 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.803320 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.803323 | orchestrator | 2026-03-08 00:53:54.803328 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-08 00:53:54.803331 | orchestrator | Sunday 08 March 2026 00:49:57 +0000 (0:00:02.231) 0:02:42.919 ********** 2026-03-08 00:53:54.803335 | orchestrator 
| skipping: [testbed-node-0] 2026-03-08 00:53:54.803339 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803342 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803346 | orchestrator | 2026-03-08 00:53:54.803350 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-08 00:53:54.803354 | orchestrator | Sunday 08 March 2026 00:49:58 +0000 (0:00:00.612) 0:02:43.531 ********** 2026-03-08 00:53:54.803358 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.803361 | orchestrator | 2026-03-08 00:53:54.803365 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-08 00:53:54.803369 | orchestrator | Sunday 08 March 2026 00:49:59 +0000 (0:00:00.937) 0:02:44.469 ********** 2026-03-08 00:53:54.803380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:53:54.803390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:53:54.803688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:53:54.803758 | orchestrator | 2026-03-08 00:53:54.803769 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-08 00:53:54.803776 | orchestrator | Sunday 08 March 2026 00:50:02 +0000 (0:00:03.774) 0:02:48.243 ********** 2026-03-08 00:53:54.803795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-08 00:53:54.803803 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.803815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:53:54.803826 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:53:54.803845 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803850 | orchestrator | 2026-03-08 00:53:54.803856 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-08 00:53:54.803862 | orchestrator | Sunday 08 March 2026 00:50:04 +0000 (0:00:01.741) 0:02:49.984 ********** 2026-03-08 00:53:54.803872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:54.803885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:54.803893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:54.803900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:54.803905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-08 00:53:54.803909 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.803913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:54.803917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:54.803921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:54.803925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:54.803929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-08 00:53:54.803933 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.803940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:54.803944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:54.803948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:54.803956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:54.803960 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-08 00:53:54.803964 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.803968 | orchestrator | 2026-03-08 00:53:54.803972 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-08 00:53:54.803977 | orchestrator | Sunday 08 March 2026 00:50:05 +0000 (0:00:01.019) 0:02:51.003 ********** 2026-03-08 00:53:54.803980 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.803984 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.803988 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.803991 | orchestrator | 2026-03-08 00:53:54.803995 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-08 00:53:54.803999 | orchestrator | Sunday 08 March 2026 00:50:07 +0000 (0:00:01.371) 0:02:52.375 ********** 2026-03-08 00:53:54.804003 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.804007 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.804011 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.804014 | orchestrator | 2026-03-08 00:53:54.804018 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-08 00:53:54.804022 | orchestrator | Sunday 08 March 2026 00:50:09 +0000 (0:00:02.188) 0:02:54.564 ********** 2026-03-08 00:53:54.804026 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804032 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804039 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804046 | orchestrator | 2026-03-08 00:53:54.804054 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-08 00:53:54.804061 | orchestrator | Sunday 08 March 2026 00:50:09 +0000 (0:00:00.344) 0:02:54.908 
********** 2026-03-08 00:53:54.804068 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804073 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804079 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804086 | orchestrator | 2026-03-08 00:53:54.804091 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-08 00:53:54.804097 | orchestrator | Sunday 08 March 2026 00:50:10 +0000 (0:00:00.530) 0:02:55.439 ********** 2026-03-08 00:53:54.804103 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.804109 | orchestrator | 2026-03-08 00:53:54.804114 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-08 00:53:54.804121 | orchestrator | Sunday 08 March 2026 00:50:11 +0000 (0:00:01.081) 0:02:56.521 ********** 2026-03-08 00:53:54.804128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 
00:53:54.804161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:54.804172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:54.804178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:53:54.804186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:54.804193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:54.804202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:53:54.804212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:54.804217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:54.804223 | orchestrator | 2026-03-08 00:53:54.804231 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-08 00:53:54.804241 | orchestrator | Sunday 08 March 2026 00:50:14 +0000 (0:00:03.532) 0:03:00.053 ********** 2026-03-08 00:53:54.804247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:53:54.804253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:54.804267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:54.804274 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 
00:53:54.804296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:54.804303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:54.804310 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:53:54.804331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:54.804342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:54.804346 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804351 | orchestrator | 2026-03-08 00:53:54.804355 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-03-08 00:53:54.804360 | orchestrator | Sunday 08 March 2026 00:50:15 +0000 (0:00:00.934) 0:03:00.987 ********** 2026-03-08 00:53:54.804367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-08 00:53:54.804372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-08 00:53:54.804377 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-08 00:53:54.804386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-08 00:53:54.804391 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-08 00:53:54.804400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-08 00:53:54.804405 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804409 | orchestrator | 2026-03-08 00:53:54.804413 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-08 00:53:54.804418 | orchestrator | Sunday 08 March 2026 00:50:16 +0000 (0:00:00.882) 0:03:01.869 ********** 2026-03-08 00:53:54.804423 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.804430 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.804433 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.804437 | orchestrator | 2026-03-08 00:53:54.804441 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-08 00:53:54.804444 | orchestrator | Sunday 08 March 2026 00:50:17 +0000 (0:00:01.307) 0:03:03.177 ********** 2026-03-08 00:53:54.804448 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.804452 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.804457 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.804463 | orchestrator | 2026-03-08 00:53:54.804473 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-08 00:53:54.804479 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:02.311) 0:03:05.489 ********** 2026-03-08 00:53:54.804485 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804492 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804498 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804504 | orchestrator | 2026-03-08 00:53:54.804510 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-08 00:53:54.804516 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:00.611) 0:03:06.101 ********** 2026-03-08 00:53:54.804523 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.804528 | orchestrator | 2026-03-08 00:53:54.804531 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-08 00:53:54.804535 | orchestrator | Sunday 08 March 2026 00:50:21 +0000 (0:00:00.971) 0:03:07.073 ********** 2026-03-08 00:53:54.804543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 00:53:54.804551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 00:53:54.804563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 
00:53:54.804568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804579 | orchestrator | 2026-03-08 00:53:54.804582 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-08 00:53:54.804586 | orchestrator | Sunday 08 March 2026 00:50:26 +0000 (0:00:04.345) 0:03:11.418 ********** 2026-03-08 00:53:54.804590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 00:53:54.804594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804600 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 00:53:54.804621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804625 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 00:53:54.804638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804645 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804649 | orchestrator | 2026-03-08 00:53:54.804653 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-08 00:53:54.804657 | orchestrator | Sunday 08 March 2026 00:50:27 +0000 (0:00:01.330) 0:03:12.748 ********** 2026-03-08 00:53:54.804660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-08 00:53:54.804665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-08 00:53:54.804669 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.804673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-08 00:53:54.804677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-08 00:53:54.804680 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.804684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-08 00:53:54.804688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-08 00:53:54.804691 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.804695 | orchestrator | 2026-03-08 00:53:54.804699 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-08 00:53:54.804703 | orchestrator | Sunday 08 March 2026 00:50:28 +0000 (0:00:01.136) 0:03:13.885 ********** 2026-03-08 00:53:54.804707 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.804710 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.804714 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.804718 | orchestrator | 2026-03-08 00:53:54.804722 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-08 00:53:54.804725 | orchestrator | Sunday 08 March 2026 00:50:30 +0000 (0:00:01.503) 0:03:15.389 ********** 
2026-03-08 00:53:54.804729 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.804733 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.804736 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.804740 | orchestrator | 2026-03-08 00:53:54.804744 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-08 00:53:54.804747 | orchestrator | Sunday 08 March 2026 00:50:32 +0000 (0:00:02.274) 0:03:17.663 ********** 2026-03-08 00:53:54.804751 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.804755 | orchestrator | 2026-03-08 00:53:54.804758 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-08 00:53:54.804762 | orchestrator | Sunday 08 March 2026 00:50:33 +0000 (0:00:01.357) 0:03:19.021 ********** 2026-03-08 00:53:54.804769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-08 00:53:54.804778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.804791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:54.804798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:54.804818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804833 | orchestrator |
2026-03-08 00:53:54.804837 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-08 00:53:54.804840 | orchestrator | Sunday 08 March 2026 00:50:37 +0000 (0:00:03.891) 0:03:22.912 **********
2026-03-08 00:53:54.804849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:54.804853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804865 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.804869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:54.804876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:54.804887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804907 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.804913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.804920 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.804924 | orchestrator |
2026-03-08 00:53:54.804928 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-08 00:53:54.804931 | orchestrator | Sunday 08 March 2026 00:50:38 +0000 (0:00:00.727) 0:03:23.640 **********
2026-03-08 00:53:54.804935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:54.804941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:54.804945 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.804949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:54.804953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:54.804957 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.804961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:54.804964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:54.804968 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.804972 | orchestrator |
2026-03-08 00:53:54.804976 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-08 00:53:54.804980 | orchestrator | Sunday 08 March 2026 00:50:39 +0000 (0:00:01.247) 0:03:24.888 **********
2026-03-08 00:53:54.804984 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.804987 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.804991 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.804994 | orchestrator |
2026-03-08 00:53:54.804998 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-08 00:53:54.805002 | orchestrator | Sunday 08 March 2026 00:50:40 +0000 (0:00:01.373) 0:03:26.261 **********
2026-03-08 00:53:54.805006 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.805010 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.805013 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.805018 | orchestrator |
2026-03-08 00:53:54.805022 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-08 00:53:54.805026 | orchestrator | Sunday 08 March 2026 00:50:43 +0000 (0:00:02.096) 0:03:28.358 **********
2026-03-08 00:53:54.805030 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.805033 | orchestrator |
2026-03-08 00:53:54.805037 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-08 00:53:54.805041 | orchestrator | Sunday 08 March 2026 00:50:44 +0000 (0:00:01.383) 0:03:29.742 **********
2026-03-08 00:53:54.805045 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:53:54.805051 | orchestrator |
2026-03-08 00:53:54.805055 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-08 00:53:54.805059 | orchestrator | Sunday 08 March 2026 00:50:47 +0000 (0:00:02.877) 0:03:32.619 **********
2026-03-08 00:53:54.805227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:53:54.805243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-08 00:53:54.805248 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.805252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:53:54.805261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-08 00:53:54.805265 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.805276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:53:54.805282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-08 00:53:54.805286 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.805290 | orchestrator |
2026-03-08 00:53:54.805293 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-03-08 00:53:54.805297 | orchestrator | Sunday 08 March 2026 00:50:49 +0000 (0:00:02.233) 0:03:34.852 **********
2026-03-08 00:53:54.805307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:53:54.805314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-08 00:53:54.805318 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.805322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:53:54.805329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-08 00:53:54.805333 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.805342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:53:54.805346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-08 00:53:54.805350 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.805354 | orchestrator |
2026-03-08 00:53:54.805358 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-03-08 00:53:54.805362 | orchestrator | Sunday 08 March 2026 00:50:52 +0000 (0:00:03.001) 0:03:37.854 **********
2026-03-08 00:53:54.805366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-08 00:53:54.805373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-08 00:53:54.805377 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.805381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-08 00:53:54.805387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-08 00:53:54.805391 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.805397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-08 00:53:54.805401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-08 00:53:54.805405 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.805409 | orchestrator |
2026-03-08 00:53:54.805413 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-08 00:53:54.805416 | orchestrator | Sunday 08 March 2026 00:50:55 +0000 (0:00:03.083) 0:03:40.937 **********
2026-03-08 00:53:54.805423 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.805427 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.805431 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.805434 | orchestrator |
2026-03-08 00:53:54.805438 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-08 00:53:54.805442 | orchestrator | Sunday 08 March 2026 00:50:57 +0000 (0:00:01.832) 0:03:42.770 **********
2026-03-08 00:53:54.805446 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.805450 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.805453 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.805457 | orchestrator |
2026-03-08 00:53:54.805461 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-08 00:53:54.805465 | orchestrator | Sunday 08 March 2026 00:50:58 +0000 (0:00:01.497) 0:03:44.267 **********
2026-03-08 00:53:54.805469 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.805472 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.805476 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.805480 | orchestrator |
2026-03-08 00:53:54.805484 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-08 00:53:54.805487 | orchestrator | Sunday 08 March 2026 00:50:59 +0000 (0:00:00.352) 0:03:44.620 **********
2026-03-08 00:53:54.805491 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.805495 | orchestrator |
2026-03-08 00:53:54.805499 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-08 00:53:54.805503 | orchestrator | Sunday 08 March 2026 00:51:00 +0000 (0:00:01.560) 0:03:46.181 **********
2026-03-08 00:53:54.805507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'],
'active_passive': True}}}}) 2026-03-08 00:53:54.805515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-08 00:53:54.805523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-08 00:53:54.805527 | orchestrator | 2026-03-08 00:53:54.805542 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-08 00:53:54.805546 | orchestrator | Sunday 08 March 2026 00:51:02 +0000 (0:00:01.713) 0:03:47.894 ********** 2026-03-08 00:53:54.805556 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-08 00:53:54.805561 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.805567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-08 00:53:54.805573 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.805580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-08 00:53:54.805586 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.805592 | orchestrator | 2026-03-08 00:53:54.805598 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-08 00:53:54.805604 | orchestrator | Sunday 08 March 2026 00:51:02 +0000 (0:00:00.446) 0:03:48.340 ********** 2026-03-08 00:53:54.805611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-08 00:53:54.805618 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.805627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-08 00:53:54.805634 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.805641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-08 00:53:54.805652 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:53:54.805659 | orchestrator | 2026-03-08 00:53:54.805666 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-08 00:53:54.805672 | orchestrator | Sunday 08 March 2026 00:51:03 +0000 (0:00:00.978) 0:03:49.319 ********** 2026-03-08 00:53:54.805679 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.805688 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.805693 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.805696 | orchestrator | 2026-03-08 00:53:54.805700 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-08 00:53:54.805704 | orchestrator | Sunday 08 March 2026 00:51:04 +0000 (0:00:00.493) 0:03:49.813 ********** 2026-03-08 00:53:54.805708 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.805712 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.805715 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.805719 | orchestrator | 2026-03-08 00:53:54.805723 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-08 00:53:54.805727 | orchestrator | Sunday 08 March 2026 00:51:05 +0000 (0:00:01.358) 0:03:51.171 ********** 2026-03-08 00:53:54.805731 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.805734 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.805739 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.805742 | orchestrator | 2026-03-08 00:53:54.805746 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-08 00:53:54.805750 | orchestrator | Sunday 08 March 2026 00:51:06 +0000 (0:00:00.343) 0:03:51.515 ********** 2026-03-08 00:53:54.805754 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.805757 | orchestrator | 2026-03-08 00:53:54.805761 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-08 00:53:54.805765 | orchestrator | Sunday 08 March 2026 00:51:07 +0000 (0:00:01.384) 0:03:52.900 ********** 2026-03-08 00:53:54.805769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 00:53:54.805773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805782 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:54.805802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 
00:53:54.805812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:54.805859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.805866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.805916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.805931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805937 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.805943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.805947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.805960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.805967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 00:53:54.805975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.805989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:54.805999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.806088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-08 00:53:54.806134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.806157 | orchestrator | 2026-03-08 00:53:54.806165 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-08 00:53:54.806169 | orchestrator | Sunday 08 March 2026 00:51:11 +0000 (0:00:03.903) 0:03:56.803 ********** 2026-03-08 00:53:54.806173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 00:53:54.806180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:54.806206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 00:53:54.806231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-03-08 00:53:54.806240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:54.806300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2026-03-08 00:53:54.806318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.806326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:54.806340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 00:53:54.806345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-08 
00:53:54.806352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:54.806367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:54.806384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:54.806444 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.806452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-08 00:53:54.806459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:54.806515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:54.806522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:54.806528 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.806535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:54.806567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:54.806580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:54.806587 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.806593 | orchestrator |
2026-03-08 00:53:54.806597 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-08 00:53:54.806601 | orchestrator | Sunday 08 March 2026 00:51:12 +0000 (0:00:01.276) 0:03:58.080 **********
2026-03-08 00:53:54.806606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-08 00:53:54.806610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-08 00:53:54.806615 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.806621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-08 00:53:54.806625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-08 00:53:54.806629 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.806632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-08 00:53:54.806636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-08 00:53:54.806643 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.806647 | orchestrator |
2026-03-08 00:53:54.806651 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-08 00:53:54.806657 | orchestrator | Sunday 08 March 2026 00:51:14 +0000 (0:00:01.711) 0:03:59.791 **********
2026-03-08 00:53:54.806662 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.806666 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.806669 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.806673 | orchestrator |
2026-03-08 00:53:54.806677 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-08 00:53:54.806681 | orchestrator | Sunday 08 March 2026 00:51:15 +0000 (0:00:01.224) 0:04:01.016 **********
2026-03-08 00:53:54.806684 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.806688 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.806692 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.806696 | orchestrator |
2026-03-08 00:53:54.806699 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-08 00:53:54.806703 | orchestrator | Sunday 08 March 2026 00:51:17 +0000 (0:00:01.834) 0:04:02.850 **********
2026-03-08 00:53:54.806707 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.806710 | orchestrator |
2026-03-08 00:53:54.806714 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-08 00:53:54.806718 | orchestrator | Sunday 08 March 2026 00:51:18 +0000 (0:00:01.112) 0:04:03.963 **********
2026-03-08 00:53:54.806722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806740 | orchestrator |
2026-03-08 00:53:54.806746 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-08 00:53:54.806752 | orchestrator | Sunday 08 March 2026 00:51:21 +0000 (0:00:03.242) 0:04:07.205 **********
2026-03-08 00:53:54.806760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806766 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.806772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806779 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.806785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806791 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.806797 | orchestrator |
2026-03-08 00:53:54.806803 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-08 00:53:54.806809 | orchestrator | Sunday 08 March 2026 00:51:22 +0000 (0:00:00.459) 0:04:07.665 **********
2026-03-08 00:53:54.806814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-08 00:53:54.806825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-08 00:53:54.806832 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.806842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-08 00:53:54.806848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-08 00:53:54.806855 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.806862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-08 00:53:54.806871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-08 00:53:54.806878 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.806884 | orchestrator |
2026-03-08 00:53:54.806888 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-08 00:53:54.806891 | orchestrator | Sunday 08 March 2026 00:51:22 +0000 (0:00:00.671) 0:04:08.337 **********
2026-03-08 00:53:54.806895 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.806899 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.806902 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.806906 | orchestrator |
2026-03-08 00:53:54.806910 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-08 00:53:54.806914 | orchestrator | Sunday 08 March 2026 00:51:24 +0000 (0:00:01.129) 0:04:09.467 **********
2026-03-08 00:53:54.806917 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.806921 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.806925 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.806928 | orchestrator |
2026-03-08 00:53:54.806932 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-08 00:53:54.806936 | orchestrator | Sunday 08 March 2026 00:51:25 +0000 (0:00:01.306) 0:04:11.356 **********
2026-03-08 00:53:54.806939 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.806943 | orchestrator |
2026-03-08 00:53:54.806947 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-08 00:53:54.806951 | orchestrator | Sunday 08 March 2026 00:51:27 +0000 (0:00:01.306) 0:04:12.663 **********
2026-03-08 00:53:54.806955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.806992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 00:53:54.806999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.807005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:54.807009 | orchestrator |
2026-03-08 00:53:54.807013 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-08 00:53:54.807018 | orchestrator | Sunday 08 March 2026 00:51:31 +0000 (0:00:04.133) 0:04:16.796 **********
2026-03-08 00:53:54.807024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775',
'tls_backend': 'no'}}}})  2026-03-08 00:53:54.807036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.807043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.807049 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.807070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.807077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.807084 | 
orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.807103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.807111 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.807115 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807118 | orchestrator | 2026-03-08 00:53:54.807122 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-08 00:53:54.807126 | orchestrator | Sunday 08 March 2026 00:51:32 +0000 (0:00:01.257) 0:04:18.054 ********** 2026-03-08 00:53:54.807133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807168 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807180 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807192 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:54.807211 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807215 | orchestrator | 2026-03-08 00:53:54.807219 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-08 00:53:54.807223 | orchestrator | Sunday 08 March 2026 00:51:33 +0000 (0:00:00.927) 0:04:18.981 ********** 2026-03-08 00:53:54.807226 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.807230 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.807234 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.807237 | orchestrator | 2026-03-08 00:53:54.807241 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-08 00:53:54.807245 | orchestrator | Sunday 08 March 2026 00:51:35 +0000 (0:00:01.483) 0:04:20.464 ********** 2026-03-08 00:53:54.807249 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.807253 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.807256 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.807260 | orchestrator | 2026-03-08 00:53:54.807267 | orchestrator | TASK [include_role : nova-cell] 2026-03-08 00:53:54 | INFO  | Task 5336c1a0-3e66-4892-b314-928678c6142b is in state STARTED 2026-03-08 00:53:54.807271 | orchestrator | 2026-03-08 00:53:54 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state STARTED 2026-03-08 00:53:54.807274 | orchestrator | 2026-03-08 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:54.807278 | orchestrator | ************************************************ 2026-03-08 00:53:54.807282 | orchestrator | Sunday 08 March 2026 00:51:37 +0000 (0:00:02.178) 0:04:22.642 ********** 2026-03-08 00:53:54.807286 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-08 00:53:54.807289 | orchestrator | 2026-03-08 00:53:54.807293 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-08 00:53:54.807297 | orchestrator | Sunday 08 March 2026 00:51:38 +0000 (0:00:01.576) 0:04:24.219 ********** 2026-03-08 00:53:54.807304 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-08 00:53:54.807308 | orchestrator | 2026-03-08 00:53:54.807312 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-08 00:53:54.807319 | orchestrator | Sunday 08 March 2026 00:51:39 +0000 (0:00:00.772) 0:04:24.992 ********** 2026-03-08 00:53:54.807323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-08 00:53:54.807327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-08 00:53:54.807331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-08 00:53:54.807335 | orchestrator | 2026-03-08 00:53:54.807339 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-08 00:53:54.807343 | orchestrator | Sunday 08 March 2026 00:51:43 +0000 (0:00:04.152) 0:04:29.144 ********** 2026-03-08 00:53:54.807347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807351 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807359 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807365 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807369 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807373 | orchestrator | 2026-03-08 00:53:54.807377 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-08 00:53:54.807380 | orchestrator | Sunday 08 March 2026 00:51:44 +0000 (0:00:01.006) 0:04:30.150 ********** 2026-03-08 00:53:54.807388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:54.807394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:54.807398 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:54.807406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:54.807410 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:54.807418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:54.807421 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807425 | orchestrator | 2026-03-08 00:53:54.807429 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-08 00:53:54.807433 | orchestrator | Sunday 08 March 2026 00:51:46 +0000 (0:00:01.372) 0:04:31.523 ********** 2026-03-08 00:53:54.807436 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.807440 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.807444 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.807447 | orchestrator | 2026-03-08 00:53:54.807451 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-08 00:53:54.807455 | orchestrator | Sunday 08 March 2026 00:51:48 +0000 (0:00:02.245) 0:04:33.768 ********** 2026-03-08 00:53:54.807459 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.807463 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.807466 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.807470 | orchestrator | 2026-03-08 00:53:54.807474 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-08 00:53:54.807478 | orchestrator | Sunday 08 March 2026 00:51:51 
+0000 (0:00:02.682) 0:04:36.451 ********** 2026-03-08 00:53:54.807481 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-08 00:53:54.807485 | orchestrator | 2026-03-08 00:53:54.807489 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-08 00:53:54.807493 | orchestrator | Sunday 08 March 2026 00:51:52 +0000 (0:00:01.148) 0:04:37.600 ********** 2026-03-08 00:53:54.807497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807501 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807515 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807523 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807527 | orchestrator | 2026-03-08 00:53:54.807530 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-08 00:53:54.807536 | orchestrator | Sunday 08 March 2026 00:51:53 +0000 (0:00:01.110) 0:04:38.710 ********** 2026-03-08 00:53:54.807540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807544 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 
00:53:54.807552 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:54.807560 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807564 | orchestrator | 2026-03-08 00:53:54.807667 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-08 00:53:54.807678 | orchestrator | Sunday 08 March 2026 00:51:54 +0000 (0:00:01.158) 0:04:39.868 ********** 2026-03-08 00:53:54.807684 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807690 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807697 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807703 | orchestrator | 2026-03-08 00:53:54.807709 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-08 00:53:54.807716 | orchestrator | Sunday 08 March 2026 00:51:56 +0000 (0:00:01.564) 0:04:41.432 ********** 2026-03-08 00:53:54.807729 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:54.807737 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:54.807741 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:54.807744 | orchestrator | 2026-03-08 00:53:54.807748 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-08 00:53:54.807752 | orchestrator | Sunday 08 March 2026 00:51:58 +0000 (0:00:02.062) 0:04:43.495 ********** 2026-03-08 00:53:54.807755 | 
orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:54.807759 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:54.807763 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:54.807766 | orchestrator | 2026-03-08 00:53:54.807770 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-08 00:53:54.807774 | orchestrator | Sunday 08 March 2026 00:52:00 +0000 (0:00:02.622) 0:04:46.117 ********** 2026-03-08 00:53:54.807778 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-08 00:53:54.807782 | orchestrator | 2026-03-08 00:53:54.807785 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-08 00:53:54.807789 | orchestrator | Sunday 08 March 2026 00:52:01 +0000 (0:00:00.757) 0:04:46.874 ********** 2026-03-08 00:53:54.807793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:54.807797 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:54.807811 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:54.807818 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807822 | orchestrator | 2026-03-08 00:53:54.807826 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-08 00:53:54.807829 | orchestrator | Sunday 08 March 2026 00:52:02 +0000 (0:00:01.112) 0:04:47.987 ********** 2026-03-08 00:53:54.807833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:54.807840 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:54.807860 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:54.807868 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807872 | orchestrator | 2026-03-08 00:53:54.807875 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-08 00:53:54.807880 | orchestrator | Sunday 08 March 2026 00:52:03 +0000 (0:00:01.172) 0:04:49.159 ********** 2026-03-08 00:53:54.807886 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.807892 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.807898 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.807903 | orchestrator | 2026-03-08 00:53:54.807909 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-08 00:53:54.807915 | orchestrator | Sunday 08 March 2026 00:52:05 +0000 (0:00:01.343) 0:04:50.502 ********** 2026-03-08 00:53:54.807921 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:54.807927 | orchestrator | ok: 
[testbed-node-1] 2026-03-08 00:53:54.807932 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:54.807939 | orchestrator | 2026-03-08 00:53:54.807944 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-08 00:53:54.807950 | orchestrator | Sunday 08 March 2026 00:52:07 +0000 (0:00:02.072) 0:04:52.574 ********** 2026-03-08 00:53:54.807955 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:54.807962 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:54.807968 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:54.807973 | orchestrator | 2026-03-08 00:53:54.807978 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-08 00:53:54.807983 | orchestrator | Sunday 08 March 2026 00:52:10 +0000 (0:00:02.840) 0:04:55.415 ********** 2026-03-08 00:53:54.807990 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.807995 | orchestrator | 2026-03-08 00:53:54.808001 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-08 00:53:54.808006 | orchestrator | Sunday 08 March 2026 00:52:11 +0000 (0:00:01.489) 0:04:56.905 ********** 2026-03-08 00:53:54.808016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.808030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:54.808036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.808074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.808083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:54.808095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.808126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.808132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:54.808155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.808180 | orchestrator | 2026-03-08 00:53:54.808186 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using 
single external frontend] *** 2026-03-08 00:53:54.808192 | orchestrator | Sunday 08 March 2026 00:52:14 +0000 (0:00:03.348) 0:05:00.253 ********** 2026-03-08 00:53:54.808212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.808218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:54.808224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.808250 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.808269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.808277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:54.808283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808289 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.808308 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.808314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.808321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:54.808341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:54.808355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:54.808361 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.808370 | orchestrator | 2026-03-08 00:53:54.808377 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-08 00:53:54.808383 | orchestrator | Sunday 08 March 2026 00:52:15 +0000 (0:00:00.689) 0:05:00.943 ********** 2026-03-08 00:53:54.808389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:54.808399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:54.808406 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.808412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-03-08 00:53:54.808419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:54.808425 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.808431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:54.808437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:54.808443 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.808449 | orchestrator | 2026-03-08 00:53:54.808455 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-08 00:53:54.808460 | orchestrator | Sunday 08 March 2026 00:52:16 +0000 (0:00:01.213) 0:05:02.156 ********** 2026-03-08 00:53:54.808466 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.808472 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.808478 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.808484 | orchestrator | 2026-03-08 00:53:54.808489 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-08 00:53:54.808496 | orchestrator | Sunday 08 March 2026 00:52:18 +0000 (0:00:01.285) 0:05:03.442 ********** 2026-03-08 00:53:54.808501 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:54.808507 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:54.808513 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:54.808518 | orchestrator | 2026-03-08 
00:53:54.808525 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-08 00:53:54.808547 | orchestrator | Sunday 08 March 2026 00:52:20 +0000 (0:00:01.997) 0:05:05.439 ********** 2026-03-08 00:53:54.808554 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.808560 | orchestrator | 2026-03-08 00:53:54.808566 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-08 00:53:54.808572 | orchestrator | Sunday 08 March 2026 00:52:21 +0000 (0:00:01.423) 0:05:06.863 ********** 2026-03-08 00:53:54.808579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:53:54.808593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-08 00:53:54.808606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-08 00:53:54.808614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-08 00:53:54.808638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-08 00:53:54.808651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-08 00:53:54.808658 | orchestrator |
2026-03-08 00:53:54.808664 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-03-08 00:53:54.808670 | orchestrator | Sunday 08 March 2026 00:52:27 +0000 (0:00:05.523) 0:05:12.386 **********
2026-03-08 00:53:54.808679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-08 00:53:54.808685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-08 00:53:54.808707 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.808715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-08 00:53:54.808729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-08 00:53:54.808736 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.808745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-08 00:53:54.808752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-08 00:53:54.808759 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.808765 | orchestrator |
2026-03-08 00:53:54.808771 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-03-08 00:53:54.808777 | orchestrator | Sunday 08 March 2026 00:52:27 +0000 (0:00:00.665) 0:05:13.052 **********
2026-03-08 00:53:54.808800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-08 00:53:54.808808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:54.808821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:54.808828 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.808834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-08 00:53:54.808839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:54.808845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:54.808851 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.808858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-08 00:53:54.808864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:54.808870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:54.808875 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.808881 | orchestrator |
2026-03-08 00:53:54.808888 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-03-08 00:53:54.808897 | orchestrator | Sunday 08 March 2026 00:52:28 +0000 (0:00:00.851) 0:05:13.988 **********
2026-03-08 00:53:54.808904 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.808911 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.808916 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.808919 | orchestrator |
2026-03-08 00:53:54.808924 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-03-08 00:53:54.808930 | orchestrator | Sunday 08 March 2026 00:52:29 +0000 (0:00:00.851) 0:05:14.840 **********
2026-03-08 00:53:54.808937 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.808942 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.808948 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.808956 | orchestrator |
2026-03-08 00:53:54.808960 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-03-08 00:53:54.808965 | orchestrator | Sunday 08 March 2026 00:52:30 +0000 (0:00:01.364) 0:05:16.204 **********
2026-03-08 00:53:54.808970 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:54.808976 | orchestrator |
2026-03-08 00:53:54.808985 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-03-08 00:53:54.808994 | orchestrator | Sunday 08 March 2026 00:52:32 +0000 (0:00:01.442) 0:05:17.647 **********
2026-03-08 00:53:54.809000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:54.809031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:54.809040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:54.809070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:54.809082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:54.809117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:54.809124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:54.809174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:54.809178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:54.809205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:54.809209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:54.809236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:54.809240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809252 | orchestrator |
2026-03-08 00:53:54.809256 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-08 00:53:54.809263 | orchestrator | Sunday 08 March 2026 00:52:37 +0000 (0:00:04.723) 0:05:22.370 **********
2026-03-08 00:53:54.809268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:54.809278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:54.809282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:54.809292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:54.809296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:54.809335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:54.809351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled':
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-08 00:53:54.809355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 00:53:54.809367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 00:53:54.809381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809389 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 00:53:54.809397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-08 00:53:54.809404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  
2026-03-08 00:53:54.809408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-08 00:53:54.809412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 00:53:54.809421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 00:53:54.809444 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 00:53:54.809452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-08 00:53:54.809461 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-08 00:53:54.809465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:54.809475 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 00:53:54.809479 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809483 | orchestrator | 2026-03-08 00:53:54.809487 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-08 00:53:54.809491 | orchestrator | Sunday 08 March 2026 00:52:38 +0000 (0:00:01.285) 0:05:23.656 ********** 2026-03-08 00:53:54.809495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-08 00:53:54.809499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-08 00:53:54.809504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:54.809511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2026-03-08 00:53:54.809515 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-08 00:53:54.809523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-08 00:53:54.809529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:54.809533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:54.809537 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-08 00:53:54.809545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-08 00:53:54.809548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:54.809552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:54.809556 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809560 | orchestrator | 2026-03-08 00:53:54.809564 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-08 00:53:54.809568 | orchestrator | Sunday 08 March 2026 00:52:39 +0000 (0:00:01.095) 0:05:24.752 ********** 2026-03-08 00:53:54.809574 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809578 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809582 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809585 | orchestrator | 2026-03-08 00:53:54.809589 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-08 00:53:54.809593 | orchestrator | Sunday 08 March 2026 00:52:39 +0000 (0:00:00.451) 0:05:25.204 ********** 2026-03-08 00:53:54.809597 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809600 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809604 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809608 | orchestrator | 2026-03-08 00:53:54.809612 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-08 00:53:54.809615 | orchestrator | Sunday 08 March 2026 00:52:41 +0000 (0:00:01.703) 0:05:26.907 ********** 2026-03-08 00:53:54.809619 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-08 00:53:54.809626 | orchestrator | 2026-03-08 00:53:54.809630 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-08 00:53:54.809634 | orchestrator | Sunday 08 March 2026 00:52:43 +0000 (0:00:01.795) 0:05:28.703 ********** 2026-03-08 00:53:54.809638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:53:54.809645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:53:54.809649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:53:54.809653 | orchestrator | 2026-03-08 00:53:54.809657 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-08 00:53:54.809660 | orchestrator | Sunday 08 March 2026 00:52:45 +0000 (0:00:02.519) 0:05:31.222 ********** 2026-03-08 00:53:54.809667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-08 00:53:54.809674 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-08 00:53:54.809683 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-08 00:53:54.809694 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809698 | orchestrator | 2026-03-08 00:53:54.809702 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-08 00:53:54.809706 | orchestrator | Sunday 08 March 2026 00:52:46 +0000 (0:00:00.398) 0:05:31.621 ********** 2026-03-08 00:53:54.809710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-08 00:53:54.809714 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-08 00:53:54.809722 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-08 
00:53:54.809730 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809734 | orchestrator | 2026-03-08 00:53:54.809737 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-08 00:53:54.809741 | orchestrator | Sunday 08 March 2026 00:52:47 +0000 (0:00:01.047) 0:05:32.669 ********** 2026-03-08 00:53:54.809745 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809751 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809755 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809759 | orchestrator | 2026-03-08 00:53:54.809763 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-08 00:53:54.809769 | orchestrator | Sunday 08 March 2026 00:52:47 +0000 (0:00:00.423) 0:05:33.093 ********** 2026-03-08 00:53:54.809773 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809777 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809781 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809784 | orchestrator | 2026-03-08 00:53:54.809788 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-08 00:53:54.809792 | orchestrator | Sunday 08 March 2026 00:52:49 +0000 (0:00:01.428) 0:05:34.521 ********** 2026-03-08 00:53:54.809796 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:54.809799 | orchestrator | 2026-03-08 00:53:54.809803 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-08 00:53:54.809807 | orchestrator | Sunday 08 March 2026 00:52:51 +0000 (0:00:01.856) 0:05:36.377 ********** 2026-03-08 00:53:54.809811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.809818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.809823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.809832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.809836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.809840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:54.809844 | orchestrator | 2026-03-08 00:53:54.809848 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-08 00:53:54.809854 | orchestrator | Sunday 08 March 2026 00:52:57 +0000 (0:00:06.328) 
0:05:42.706 ********** 2026-03-08 00:53:54.809858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.809867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.809871 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.809879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  
2026-03-08 00:53:54.809883 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-08 00:53:54.809895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}})  2026-03-08 00:53:54.809899 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809903 | orchestrator | 2026-03-08 00:53:54.809907 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-08 00:53:54.809913 | orchestrator | Sunday 08 March 2026 00:52:58 +0000 (0:00:00.698) 0:05:43.405 ********** 2026-03-08 00:53:54.809916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809932 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:54.809936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}})  2026-03-08 00:53:54.809944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809952 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:54.809955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:54.809977 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:54.809981 | orchestrator | 2026-03-08 00:53:54.809984 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-08 00:53:54.809988 | orchestrator | Sunday 08 March 2026 00:52:59 +0000 (0:00:01.815) 0:05:45.221 
**********
2026-03-08 00:53:54.809992 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.809996 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.810000 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.810006 | orchestrator |
2026-03-08 00:53:54.810035 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-08 00:53:54.810042 | orchestrator | Sunday 08 March 2026 00:53:01 +0000 (0:00:01.398) 0:05:46.619 **********
2026-03-08 00:53:54.810048 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.810057 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.810063 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.810069 | orchestrator |
2026-03-08 00:53:54.810075 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-08 00:53:54.810082 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:02.266) 0:05:48.886 **********
2026-03-08 00:53:54.810089 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810095 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810102 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810109 | orchestrator |
2026-03-08 00:53:54.810116 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-08 00:53:54.810122 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:00.318) 0:05:49.204 **********
2026-03-08 00:53:54.810129 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810135 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810159 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810166 | orchestrator |
2026-03-08 00:53:54.810171 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-08 00:53:54.810183 | orchestrator | Sunday 08 March 2026 00:53:04 +0000 (0:00:00.381) 0:05:49.585 **********
2026-03-08 00:53:54.810188 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810192 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810196 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810200 | orchestrator |
2026-03-08 00:53:54.810204 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-08 00:53:54.810208 | orchestrator | Sunday 08 March 2026 00:53:04 +0000 (0:00:00.685) 0:05:50.271 **********
2026-03-08 00:53:54.810212 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810215 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810219 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810223 | orchestrator |
2026-03-08 00:53:54.810226 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-08 00:53:54.810230 | orchestrator | Sunday 08 March 2026 00:53:05 +0000 (0:00:00.351) 0:05:50.622 **********
2026-03-08 00:53:54.810234 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810237 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810241 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810245 | orchestrator |
2026-03-08 00:53:54.810249 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-08 00:53:54.810253 | orchestrator | Sunday 08 March 2026 00:53:05 +0000 (0:00:00.412) 0:05:51.035 **********
2026-03-08 00:53:54.810257 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810260 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810264 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810268 | orchestrator |
2026-03-08 00:53:54.810278 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-08 00:53:54.810282 | orchestrator | Sunday 08 March 2026 00:53:06 +0000 (0:00:00.875) 0:05:51.910 **********
2026-03-08 00:53:54.810286 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810290 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810293 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810297 | orchestrator |
2026-03-08 00:53:54.810301 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-08 00:53:54.810305 | orchestrator | Sunday 08 March 2026 00:53:07 +0000 (0:00:00.746) 0:05:52.657 **********
2026-03-08 00:53:54.810308 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810312 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810316 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810320 | orchestrator |
2026-03-08 00:53:54.810323 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-08 00:53:54.810327 | orchestrator | Sunday 08 March 2026 00:53:07 +0000 (0:00:00.376) 0:05:53.034 **********
2026-03-08 00:53:54.810331 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810334 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810338 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810342 | orchestrator |
2026-03-08 00:53:54.810346 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-08 00:53:54.810350 | orchestrator | Sunday 08 March 2026 00:53:08 +0000 (0:00:00.950) 0:05:53.985 **********
2026-03-08 00:53:54.810354 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810357 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810361 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810365 | orchestrator |
2026-03-08 00:53:54.810368 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-08 00:53:54.810372 | orchestrator | Sunday 08 March 2026 00:53:09 +0000 (0:00:01.225) 0:05:55.210 **********
2026-03-08 00:53:54.810376 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810380 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810383 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810387 | orchestrator |
2026-03-08 00:53:54.810394 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-08 00:53:54.810398 | orchestrator | Sunday 08 March 2026 00:53:10 +0000 (0:00:01.025) 0:05:56.236 **********
2026-03-08 00:53:54.810402 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.810406 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.810410 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.810413 | orchestrator |
2026-03-08 00:53:54.810418 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-08 00:53:54.810425 | orchestrator | Sunday 08 March 2026 00:53:21 +0000 (0:00:10.491) 0:06:06.728 **********
2026-03-08 00:53:54.810430 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810436 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810442 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810448 | orchestrator |
2026-03-08 00:53:54.810455 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-08 00:53:54.810461 | orchestrator | Sunday 08 March 2026 00:53:22 +0000 (0:00:00.714) 0:06:07.442 **********
2026-03-08 00:53:54.810467 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.810474 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.810481 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.810486 | orchestrator |
2026-03-08 00:53:54.810493 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-08 00:53:54.810498 | orchestrator | Sunday 08 March 2026 00:53:33 +0000 (0:00:10.923) 0:06:18.366 **********
2026-03-08 00:53:54.810502 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810506 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:54.810509 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:54.810513 | orchestrator |
2026-03-08 00:53:54.810517 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-08 00:53:54.810521 | orchestrator | Sunday 08 March 2026 00:53:38 +0000 (0:00:05.136) 0:06:23.502 **********
2026-03-08 00:53:54.810529 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:54.810532 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:54.810536 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:54.810540 | orchestrator |
2026-03-08 00:53:54.810544 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-08 00:53:54.810547 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:08.261) 0:06:31.764 **********
2026-03-08 00:53:54.810551 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810555 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810559 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810562 | orchestrator |
2026-03-08 00:53:54.810567 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-08 00:53:54.810570 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:00.338) 0:06:32.103 **********
2026-03-08 00:53:54.810574 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810581 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810585 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810589 | orchestrator |
2026-03-08 00:53:54.810593 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-08 00:53:54.810596 | orchestrator | Sunday 08 March 2026 00:53:47 +0000 (0:00:00.389) 0:06:32.492 **********
2026-03-08 00:53:54.810600 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810604 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810608 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810612 | orchestrator |
2026-03-08 00:53:54.810615 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-08 00:53:54.810619 | orchestrator | Sunday 08 March 2026 00:53:47 +0000 (0:00:00.696) 0:06:33.189 **********
2026-03-08 00:53:54.810623 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810627 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810631 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810634 | orchestrator |
2026-03-08 00:53:54.810638 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-08 00:53:54.810642 | orchestrator | Sunday 08 March 2026 00:53:48 +0000 (0:00:00.340) 0:06:33.530 **********
2026-03-08 00:53:54.810646 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810650 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810653 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810657 | orchestrator |
2026-03-08 00:53:54.810661 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-08 00:53:54.810665 | orchestrator | Sunday 08 March 2026 00:53:48 +0000 (0:00:00.338) 0:06:33.869 **********
2026-03-08 00:53:54.810668 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:54.810672 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:54.810676 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:54.810680 | orchestrator |
2026-03-08 00:53:54.810684 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-08 00:53:54.810688 | orchestrator | Sunday 08 March 2026 00:53:48 +0000 (0:00:00.353) 0:06:34.223 **********
2026-03-08 00:53:54.810692 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:54.810695 | orchestrator | ok:
[testbed-node-1] 2026-03-08 00:53:54.810699 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:54.810703 | orchestrator | 2026-03-08 00:53:54.810707 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-08 00:53:54.810710 | orchestrator | Sunday 08 March 2026 00:53:50 +0000 (0:00:01.297) 0:06:35.520 ********** 2026-03-08 00:53:54.810714 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:54.810718 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:54.810722 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:54.810725 | orchestrator | 2026-03-08 00:53:54.810729 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:53:54.810733 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-08 00:53:54.810741 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-08 00:53:54.810745 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-08 00:53:54.810749 | orchestrator | 2026-03-08 00:53:54.810753 | orchestrator | 2026-03-08 00:53:54.810760 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:53:54.810764 | orchestrator | Sunday 08 March 2026 00:53:51 +0000 (0:00:00.890) 0:06:36.411 ********** 2026-03-08 00:53:54.810767 | orchestrator | =============================================================================== 2026-03-08 00:53:54.810771 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.92s 2026-03-08 00:53:54.810775 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.49s 2026-03-08 00:53:54.810779 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.26s 2026-03-08 00:53:54.810782 | 
orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.33s 2026-03-08 00:53:54.810786 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.85s 2026-03-08 00:53:54.810790 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.69s 2026-03-08 00:53:54.810794 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.52s 2026-03-08 00:53:54.810798 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.14s 2026-03-08 00:53:54.810802 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.74s 2026-03-08 00:53:54.810806 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.72s 2026-03-08 00:53:54.810810 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.40s 2026-03-08 00:53:54.810814 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.35s 2026-03-08 00:53:54.810818 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.34s 2026-03-08 00:53:54.810822 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.27s 2026-03-08 00:53:54.810826 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.20s 2026-03-08 00:53:54.810829 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.15s 2026-03-08 00:53:54.810833 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.13s 2026-03-08 00:53:54.810837 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.90s 2026-03-08 00:53:54.810841 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.89s 2026-03-08 00:53:54.810845 | orchestrator | 
haproxy-config : Copying over horizon haproxy config -------------------- 3.77s
2026-03-08 00:53:57.839895 | orchestrator | 2026-03-08 00:53:57 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:53:57.840019 | orchestrator | 2026-03-08 00:53:57 | INFO  | Task 5336c1a0-3e66-4892-b314-928678c6142b is in state STARTED
2026-03-08 00:53:57.840045 | orchestrator | 2026-03-08 00:53:57 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state STARTED
2026-03-08 00:53:57.840066 | orchestrator | 2026-03-08 00:53:57 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:56:05.929635 | orchestrator | 2026-03-08 00:56:05 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state STARTED
2026-03-08 00:56:05.930704 | orchestrator | 2026-03-08 00:56:05 | INFO  | Task 5336c1a0-3e66-4892-b314-928678c6142b is in state STARTED
2026-03-08 00:56:05.932595 | orchestrator | 2026-03-08 00:56:05 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state STARTED
2026-03-08 00:56:05.932638 | orchestrator | 2026-03-08 00:56:05 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:56:08.993178 | orchestrator | 2026-03-08 00:56:08 | INFO  | Task a6c169c9-910e-4adf-8c0b-c231f4a22a91 is in state SUCCESS
2026-03-08 00:56:08.995162 | orchestrator |
2026-03-08 00:56:08.995224 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 00:56:08.995231 | orchestrator | 2.16.14
2026-03-08 00:56:08.995237 | orchestrator |
2026-03-08 00:56:08.995243 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-08 00:56:08.995248 | orchestrator |
2026-03-08 00:56:08.995253 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-08 00:56:08.995258 | orchestrator | Sunday 08 March 2026 00:44:46 +0000 (0:00:00.763) 0:00:00.763 **********
2026-03-08 00:56:08.995264 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:08.995270 | orchestrator |
2026-03-08 00:56:08.995275 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-08 00:56:08.995298 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:01.083) 0:00:01.847 **********
2026-03-08 00:56:08.995303 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995308 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995313 | orchestrator | ok:
[testbed-node-3]
2026-03-08 00:56:08.995318 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995322 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995327 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995331 | orchestrator |
2026-03-08 00:56:08.995336 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-08 00:56:08.995341 | orchestrator | Sunday 08 March 2026 00:44:49 +0000 (0:00:01.600) 0:00:03.447 **********
2026-03-08 00:56:08.995405 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995412 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995417 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995421 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995426 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995430 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995435 | orchestrator |
2026-03-08 00:56:08.995439 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-08 00:56:08.995444 | orchestrator | Sunday 08 March 2026 00:44:50 +0000 (0:00:00.955) 0:00:04.403 **********
2026-03-08 00:56:08.995449 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995453 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995458 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995462 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995467 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995471 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995476 | orchestrator |
2026-03-08 00:56:08.995480 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-08 00:56:08.995485 | orchestrator | Sunday 08 March 2026 00:44:51 +0000 (0:00:01.105) 0:00:05.508 **********
2026-03-08 00:56:08.995489 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995494 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995499 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995503 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995508 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995512 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995517 | orchestrator |
2026-03-08 00:56:08.995615 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-08 00:56:08.995622 | orchestrator | Sunday 08 March 2026 00:44:52 +0000 (0:00:00.725) 0:00:06.233 **********
2026-03-08 00:56:08.995627 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995631 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995636 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995640 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995645 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995649 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995654 | orchestrator |
2026-03-08 00:56:08.995659 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-08 00:56:08.995663 | orchestrator | Sunday 08 March 2026 00:44:52 +0000 (0:00:00.485) 0:00:06.719 **********
2026-03-08 00:56:08.995668 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995678 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995683 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995687 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995692 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995696 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995701 | orchestrator |
2026-03-08 00:56:08.995705 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-08 00:56:08.995710 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.827) 0:00:07.546 **********
2026-03-08 00:56:08.995714 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.995720 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:08.995724 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:08.995729 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:08.995733 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:08.995743 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:08.995748 | orchestrator |
2026-03-08 00:56:08.995762 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-08 00:56:08.995767 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.875) 0:00:08.422 **********
2026-03-08 00:56:08.995771 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995776 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995780 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995785 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995789 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995794 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995798 | orchestrator |
2026-03-08 00:56:08.995803 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-08 00:56:08.995807 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:00.986) 0:00:09.409 **********
2026-03-08 00:56:08.995812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-08 00:56:08.995817 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:56:08.995821 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:56:08.995852 | orchestrator |
2026-03-08 00:56:08.995857 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-08 00:56:08.995862 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:00.867) 0:00:10.276 **********
2026-03-08 00:56:08.995866 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.995871 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.995875 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.995898 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.995916 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.995921 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.995926 | orchestrator |
2026-03-08 00:56:08.995931 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-08 00:56:08.995935 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:03.778) 0:00:11.766 **********
2026-03-08 00:56:08.995940 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-08 00:56:08.995945 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:56:08.995949 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:56:08.995954 | orchestrator |
2026-03-08 00:56:08.995958 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-08 00:56:08.995963 | orchestrator | Sunday 08 March 2026 00:45:01 +0000 (0:00:00.883) 0:00:15.545 **********
2026-03-08 00:56:08.995968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-08 00:56:08.995973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-08 00:56:08.995978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-08 00:56:08.995982 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.995987 | orchestrator |
2026-03-08 00:56:08.995991 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-08 00:56:08.995996 | orchestrator | Sunday 08 March 2026 00:45:02 +0000 (0:00:02.010) 0:00:16.429 **********
2026-03-08 00:56:08.996001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996022 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.996027 | orchestrator |
2026-03-08 00:56:08.996032 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-08 00:56:08.996036 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:00.533) 0:00:18.439 **********
2026-03-08 00:56:08.996043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996085 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.996127 | orchestrator |
2026-03-08 00:56:08.996134 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-08 00:56:08.996142 | orchestrator | Sunday 08 March 2026 00:45:05 +0000 (0:00:00.533) 0:00:18.972 **********
2026-03-08 00:56:08.996160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-08 00:44:58.477045', 'end': '2026-03-08 00:44:58.562306', 'delta': '0:00:00.085261', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-08 00:44:59.866626', 'end': '2026-03-08 00:44:59.953185', 'delta': '0:00:00.086559', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-08 00:45:01.320694', 'end': '2026-03-08 00:45:01.429155', 'delta': '0:00:00.108461', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-08 00:56:08.996193 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.996201 | orchestrator |
2026-03-08 00:56:08.996209 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-08 00:56:08.996216 | orchestrator | Sunday 08 March 2026 00:45:05 +0000 (0:00:00.229) 0:00:19.201 **********
2026-03-08 00:56:08.996220 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:08.996225 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:08.996229 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:08.996259 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:08.996264 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:08.996268 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:08.996273 | orchestrator |
2026-03-08 00:56:08.996277 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-08 00:56:08.996282 | orchestrator | Sunday 08 March 2026 00:45:07 +0000 (0:00:02.312) 0:00:21.514 **********
2026-03-08 00:56:08.996287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-08 00:56:08.996291 | orchestrator |
2026-03-08 00:56:08.996296 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-08 00:56:08.996301 | orchestrator | Sunday 08 March 2026 00:45:08 +0000 (0:00:01.005) 0:00:22.520 **********
2026-03-08 00:56:08.996305 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.996310 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:08.996314 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:08.996319 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:08.996323 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:08.996328 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:08.996332 | orchestrator |
2026-03-08 00:56:08.996337 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-08 00:56:08.996341 | orchestrator | Sunday 08 March 2026 00:45:11 +0000 (0:00:03.215) 0:00:25.735 **********
2026-03-08 00:56:08.996345 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.996350 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:08.996354 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:08.996359 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:08.996363 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:08.996368 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:08.996372 | orchestrator |
2026-03-08 00:56:08.996381 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-08 00:56:08.996386 | orchestrator | Sunday 08 March 2026 00:45:13
+0000 (0:00:01.303) 0:00:27.039 ********** 2026-03-08 00:56:08.996390 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996395 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.996399 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.996404 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.996412 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.996419 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.996427 | orchestrator | 2026-03-08 00:56:08.996437 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-08 00:56:08.996447 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:01.435) 0:00:28.474 ********** 2026-03-08 00:56:08.996458 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996466 | orchestrator | 2026-03-08 00:56:08.996474 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-08 00:56:08.996582 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:00.316) 0:00:28.791 ********** 2026-03-08 00:56:08.996591 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996599 | orchestrator | 2026-03-08 00:56:08.996606 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:56:08.996614 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:00.206) 0:00:28.997 ********** 2026-03-08 00:56:08.996629 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996637 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.996664 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.996683 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.996691 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.996698 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.996705 | orchestrator | 2026-03-08 00:56:08.996718 | orchestrator | TASK [ceph-facts 
: Resolve device link(s)] ************************************* 2026-03-08 00:56:08.996730 | orchestrator | Sunday 08 March 2026 00:45:16 +0000 (0:00:00.980) 0:00:29.978 ********** 2026-03-08 00:56:08.996784 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996794 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.996801 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.996808 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.996817 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.996848 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.996857 | orchestrator | 2026-03-08 00:56:08.996871 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-08 00:56:08.996879 | orchestrator | Sunday 08 March 2026 00:45:16 +0000 (0:00:00.949) 0:00:30.928 ********** 2026-03-08 00:56:08.996887 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996893 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.996900 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.996908 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.996915 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.996921 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.996928 | orchestrator | 2026-03-08 00:56:08.996935 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-08 00:56:08.996943 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:00.610) 0:00:31.538 ********** 2026-03-08 00:56:08.996951 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.996959 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.996967 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.997039 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.997049 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.997057 | 
orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.997065 | orchestrator | 2026-03-08 00:56:08.997073 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-08 00:56:08.997083 | orchestrator | Sunday 08 March 2026 00:45:18 +0000 (0:00:01.037) 0:00:32.575 ********** 2026-03-08 00:56:08.997092 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.997100 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.997113 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.997123 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.997130 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.997138 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.997146 | orchestrator | 2026-03-08 00:56:08.997155 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-08 00:56:08.997163 | orchestrator | Sunday 08 March 2026 00:45:19 +0000 (0:00:00.770) 0:00:33.346 ********** 2026-03-08 00:56:08.997171 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.997180 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.997188 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.997196 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:08.997204 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:08.997213 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:08.997225 | orchestrator | 2026-03-08 00:56:08.997234 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-08 00:56:08.997244 | orchestrator | Sunday 08 March 2026 00:45:20 +0000 (0:00:01.049) 0:00:34.395 ********** 2026-03-08 00:56:08.997252 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:08.997261 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:08.997282 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:08.997292 | 
orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:08.997300 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:08.997308 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:08.997419 | orchestrator |
2026-03-08 00:56:08.997436 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-08 00:56:08.997446 | orchestrator | Sunday 08 March 2026 00:45:21 +0000 (0:00:00.835) 0:00:35.231 **********
2026-03-08 00:56:08.997468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db', 'dm-uuid-LVM-f03kY5XdcO8KIjPmgU6ez8t0FLA66q5e6bg790Rq5xMganTUcZGHGUvDXtiPEuVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e', 'dm-uuid-LVM-aBixsF7VHwJvWC9cdwUNtNJgwkKQp0oeNIuSXWBWS1FFfMSG9j3hPuZReyvMCd3n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.997674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mrAEW5-XpgO-ylIk-3aJm-Tg5F-lqm3-bQSDp1', 'scsi-0QEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0', 'scsi-SQEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.997691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CoIbeO-WF0K-M7eU-N2ox-nLCt-t6XQ-gHHAOC', 'scsi-0QEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf', 'scsi-SQEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.997713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe', 'scsi-SQEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.997724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.997732 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:08.997747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb', 'dm-uuid-LVM-1DgtEGOZqDrAYsIUYWXjWt4e3SxVmhLmzrC21Cb8uHjcZdNtfE2b9sZFbwNam0np'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d', 'dm-uuid-LVM-aqoktTFUlq7SjIJKcG7i1ikNBv383ZINRI52RtFLOJBuoXIuLDsmN9zlb65VZXV7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45', 'dm-uuid-LVM-U2QUDxGDRC151Udr4jM5hfm2YaN283x19epxysF51M2bfpRRaRQBoYxHcYR9gtnr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57', 'dm-uuid-LVM-RRaMeRIXPIlbqQADcFEr6dO8YwR5B90PKNztrD7g57c5m6jUbbYIAolqfQ3zFJpa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.997943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AyfzLZ-nDAT-KP8U-BC7i-9Gme-sH3R-MKWXQG', 'scsi-0QEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2', 'scsi-SQEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HcH3QJ-5LsI-kc2v-MoPJ-2a34-l4rS-3VHXB9', 'scsi-0QEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c', 'scsi-SQEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8', 'scsi-SQEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oflPUY-L2CM-wtDn-8Yeo-R4dI-ZTmC-cIevDQ', 'scsi-0QEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698', 'scsi-SQEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFbuCy-h99o-f0ck-Xj07-2du6-A3pz-GYkTZk', 'scsi-0QEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751', 'scsi-SQEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087', 'scsi-SQEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:08.999924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:08.999990 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:09.000078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:09.000089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:09.000629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:09.000639 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.000647 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.000654 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.000662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-08 00:56:09.000805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:09.000874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-08 00:56:09.000893 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.000901 | orchestrator |
2026-03-08 00:56:09.000946 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-08 00:56:09.000955 | orchestrator | Sunday 08 March 2026 00:45:22 +0000 (0:00:01.608) 0:00:36.839 **********
2026-03-08 00:56:09.000964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db', 'dm-uuid-LVM-f03kY5XdcO8KIjPmgU6ez8t0FLA66q5e6bg790Rq5xMganTUcZGHGUvDXtiPEuVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.000973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e', 'dm-uuid-LVM-aBixsF7VHwJvWC9cdwUNtNJgwkKQp0oeNIuSXWBWS1FFfMSG9j3hPuZReyvMCd3n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.000981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001210 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb', 'dm-uuid-LVM-1DgtEGOZqDrAYsIUYWXjWt4e3SxVmhLmzrC21Cb8uHjcZdNtfE2b9sZFbwNam0np'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:56:09.001298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mrAEW5-XpgO-ylIk-3aJm-Tg5F-lqm3-bQSDp1', 'scsi-0QEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0', 'scsi-SQEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001326 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CoIbeO-WF0K-M7eU-N2ox-nLCt-t6XQ-gHHAOC', 'scsi-0QEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf', 'scsi-SQEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d', 
'dm-uuid-LVM-aqoktTFUlq7SjIJKcG7i1ikNBv383ZINRI52RtFLOJBuoXIuLDsmN9zlb65VZXV7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe', 'scsi-SQEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001485 | 
orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.001541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001551 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001559 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001567 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001575 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45', 'dm-uuid-LVM-U2QUDxGDRC151Udr4jM5hfm2YaN283x19epxysF51M2bfpRRaRQBoYxHcYR9gtnr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001592 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57', 'dm-uuid-LVM-RRaMeRIXPIlbqQADcFEr6dO8YwR5B90PKNztrD7g57c5m6jUbbYIAolqfQ3zFJpa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001644 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001664 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16', 
'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001748 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AyfzLZ-nDAT-KP8U-BC7i-9Gme-sH3R-MKWXQG', 'scsi-0QEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2', 'scsi-SQEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001764 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HcH3QJ-5LsI-kc2v-MoPJ-2a34-l4rS-3VHXB9', 'scsi-0QEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c', 'scsi-SQEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8', 'scsi-SQEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001982 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.001988 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:56:09.002156 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002207 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002217 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002224 | orchestrator | 
skipping: [testbed-node-4] 2026-03-08 00:56:09.002233 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oflPUY-L2CM-wtDn-8Yeo-R4dI-ZTmC-cIevDQ', 'scsi-0QEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698', 'scsi-SQEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: 
Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf175d06-491f-4ffa-8a22-9754e6fb303e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-08 00:56:09.002309 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFbuCy-h99o-f0ck-Xj07-2du6-A3pz-GYkTZk', 'scsi-0QEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751', 'scsi-SQEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002318 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087', 'scsi-SQEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002460 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002472 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002480 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002488 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002502 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002515 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002523 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002602 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002612 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002624 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e2d3775-a8c9-4c11-b3d1-42f82657682c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:56:09.002638 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002703 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.002713 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.002720 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.002727 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002734 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002741 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002754 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002765 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002821 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002849 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002863 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0df70171-b12b-4f0f-b69e-ca5c94cd8fa7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002876 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:56:09.002883 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.002890 | orchestrator | 2026-03-08 00:56:09.002946 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-08 00:56:09.002957 | orchestrator | Sunday 08 March 2026 00:45:25 +0000 (0:00:02.180) 0:00:39.020 ********** 2026-03-08 00:56:09.002978 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.002985 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.002992 | orchestrator | ok: [testbed-node-5] 2026-03-08 
00:56:09.002999 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.003005 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.003012 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.003018 | orchestrator | 2026-03-08 00:56:09.003026 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-08 00:56:09.003033 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:02.019) 0:00:41.040 ********** 2026-03-08 00:56:09.003039 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.003046 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.003052 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.003059 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.003066 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.003073 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.003080 | orchestrator | 2026-03-08 00:56:09.003087 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-08 00:56:09.003095 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:00.811) 0:00:41.851 ********** 2026-03-08 00:56:09.003106 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003110 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003114 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003118 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.003122 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.003125 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.003130 | orchestrator | 2026-03-08 00:56:09.003136 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-08 00:56:09.003142 | orchestrator | Sunday 08 March 2026 00:45:28 +0000 (0:00:01.011) 0:00:42.863 ********** 2026-03-08 00:56:09.003149 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003159 | orchestrator | skipping: [testbed-node-4] 
2026-03-08 00:56:09.003166 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003172 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.003178 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.003184 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.003191 | orchestrator | 2026-03-08 00:56:09.003197 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-08 00:56:09.003203 | orchestrator | Sunday 08 March 2026 00:45:29 +0000 (0:00:00.824) 0:00:43.687 ********** 2026-03-08 00:56:09.003209 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003214 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003221 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003227 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.003234 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.003240 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.003247 | orchestrator | 2026-03-08 00:56:09.003253 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-08 00:56:09.003261 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:01.009) 0:00:44.696 ********** 2026-03-08 00:56:09.003280 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003286 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003293 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.003300 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003306 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.003313 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.003319 | orchestrator | 2026-03-08 00:56:09.003326 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-08 00:56:09.003333 | orchestrator | Sunday 08 March 2026 00:45:32 +0000 (0:00:01.544) 0:00:46.241 ********** 
2026-03-08 00:56:09.003340 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-08 00:56:09.003348 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-08 00:56:09.003355 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-08 00:56:09.003361 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-08 00:56:09.003368 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-08 00:56:09.003374 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-08 00:56:09.003381 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-08 00:56:09.003388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-08 00:56:09.003395 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-08 00:56:09.003401 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-08 00:56:09.003409 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-08 00:56:09.003421 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-08 00:56:09.003428 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-08 00:56:09.003435 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-08 00:56:09.003442 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-08 00:56:09.003448 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-08 00:56:09.003455 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-08 00:56:09.003468 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-08 00:56:09.003474 | orchestrator | 2026-03-08 00:56:09.003481 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-08 00:56:09.003488 | orchestrator | Sunday 08 March 2026 00:45:36 +0000 (0:00:04.338) 0:00:50.580 ********** 2026-03-08 00:56:09.003495 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-08 00:56:09.003501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:56:09.003507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-08 00:56:09.003514 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-08 00:56:09.003527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-08 00:56:09.003534 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-08 00:56:09.003540 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-08 00:56:09.003586 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-08 00:56:09.003593 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-08 00:56:09.003600 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-08 00:56:09.003614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-08 00:56:09.003621 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-08 00:56:09.003628 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003634 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-08 00:56:09.003641 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-08 00:56:09.003648 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-08 00:56:09.003655 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.003661 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.003668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-08 00:56:09.003675 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-08 00:56:09.003681 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-03-08 00:56:09.003688 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.003694 | orchestrator | 2026-03-08 00:56:09.003701 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-08 00:56:09.003708 | orchestrator | Sunday 08 March 2026 00:45:38 +0000 (0:00:01.897) 0:00:52.478 ********** 2026-03-08 00:56:09.003715 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.003721 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.003728 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.003735 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.003742 | orchestrator | 2026-03-08 00:56:09.003749 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-08 00:56:09.003757 | orchestrator | Sunday 08 March 2026 00:45:40 +0000 (0:00:01.851) 0:00:54.329 ********** 2026-03-08 00:56:09.003764 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003770 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003776 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003783 | orchestrator | 2026-03-08 00:56:09.003790 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-08 00:56:09.003796 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:00.781) 0:00:55.111 ********** 2026-03-08 00:56:09.003803 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003810 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003817 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003848 | orchestrator | 2026-03-08 00:56:09.003855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-03-08 00:56:09.003868 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:00.362) 0:00:55.474 ********** 2026-03-08 00:56:09.003874 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003880 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.003885 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.003891 | orchestrator | 2026-03-08 00:56:09.003897 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-08 00:56:09.003904 | orchestrator | Sunday 08 March 2026 00:45:42 +0000 (0:00:00.822) 0:00:56.296 ********** 2026-03-08 00:56:09.003911 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.003918 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.003924 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.003931 | orchestrator | 2026-03-08 00:56:09.003938 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-08 00:56:09.003944 | orchestrator | Sunday 08 March 2026 00:45:43 +0000 (0:00:00.725) 0:00:57.022 ********** 2026-03-08 00:56:09.003950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:56:09.003957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:56:09.003964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:56:09.003970 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.003977 | orchestrator | 2026-03-08 00:56:09.003984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-08 00:56:09.003990 | orchestrator | Sunday 08 March 2026 00:45:43 +0000 (0:00:00.504) 0:00:57.527 ********** 2026-03-08 00:56:09.003997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:56:09.004009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:56:09.004016 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:56:09.004022 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.004028 | orchestrator | 2026-03-08 00:56:09.004035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-08 00:56:09.004041 | orchestrator | Sunday 08 March 2026 00:45:44 +0000 (0:00:00.425) 0:00:57.952 ********** 2026-03-08 00:56:09.004048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:56:09.004055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:56:09.004061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:56:09.004068 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.004074 | orchestrator | 2026-03-08 00:56:09.004081 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-08 00:56:09.004088 | orchestrator | Sunday 08 March 2026 00:45:44 +0000 (0:00:00.499) 0:00:58.451 ********** 2026-03-08 00:56:09.004094 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.004101 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.004108 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.004114 | orchestrator | 2026-03-08 00:56:09.004121 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-08 00:56:09.004128 | orchestrator | Sunday 08 March 2026 00:45:44 +0000 (0:00:00.325) 0:00:58.777 ********** 2026-03-08 00:56:09.004135 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-08 00:56:09.004142 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-08 00:56:09.004176 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-08 00:56:09.004183 | orchestrator | 2026-03-08 00:56:09.004190 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-08 00:56:09.004196 | orchestrator | Sunday 08 March 2026 
00:45:46 +0000 (0:00:01.375) 0:01:00.152 ********** 2026-03-08 00:56:09.004203 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:56:09.004210 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:56:09.004217 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:56:09.004231 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:56:09.004238 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:56:09.004244 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:56:09.004251 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-08 00:56:09.004258 | orchestrator | 2026-03-08 00:56:09.004264 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-08 00:56:09.004270 | orchestrator | Sunday 08 March 2026 00:45:47 +0000 (0:00:00.840) 0:01:00.993 ********** 2026-03-08 00:56:09.004277 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:56:09.004283 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:56:09.004289 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:56:09.004296 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:56:09.004303 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:56:09.004309 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:56:09.004316 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-03-08 00:56:09.004322 | orchestrator | 2026-03-08 00:56:09.004329 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 00:56:09.004336 | orchestrator | Sunday 08 March 2026 00:45:49 +0000 (0:00:02.270) 0:01:03.263 ********** 2026-03-08 00:56:09.004344 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.004352 | orchestrator | 2026-03-08 00:56:09.004359 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-08 00:56:09.004365 | orchestrator | Sunday 08 March 2026 00:45:50 +0000 (0:00:01.458) 0:01:04.722 ********** 2026-03-08 00:56:09.004372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.004378 | orchestrator | 2026-03-08 00:56:09.004384 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:56:09.004391 | orchestrator | Sunday 08 March 2026 00:45:52 +0000 (0:00:01.261) 0:01:05.984 ********** 2026-03-08 00:56:09.004398 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.004404 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.004411 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.004417 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.004424 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.004430 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.004437 | orchestrator | 2026-03-08 00:56:09.004444 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:56:09.004450 | orchestrator | Sunday 08 March 2026 00:45:53 +0000 (0:00:01.098) 0:01:07.082 ********** 2026-03-08 
00:56:09.004457 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.004463 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.004469 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.004475 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.004481 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.004487 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.004493 | orchestrator | 2026-03-08 00:56:09.004504 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:56:09.004510 | orchestrator | Sunday 08 March 2026 00:45:53 +0000 (0:00:00.841) 0:01:07.923 ********** 2026-03-08 00:56:09.004517 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.004523 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.004529 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.004540 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.004546 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.004552 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.004559 | orchestrator | 2026-03-08 00:56:09.004565 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:56:09.004571 | orchestrator | Sunday 08 March 2026 00:45:56 +0000 (0:00:02.703) 0:01:10.627 ********** 2026-03-08 00:56:09.004578 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.004584 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.004590 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.004596 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.004603 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.004609 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.004615 | orchestrator | 2026-03-08 00:56:09.004621 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-08 00:56:09.004628 | orchestrator | 
Sunday 08 March 2026 00:45:58 +0000 (0:00:02.234) 0:01:12.861 ********** 2026-03-08 00:56:09.004693 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.004698 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.004702 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.004705 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.004709 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.004726 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.004731 | orchestrator | 2026-03-08 00:56:09.004735 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-08 00:56:09.004739 | orchestrator | Sunday 08 March 2026 00:46:00 +0000 (0:00:01.497) 0:01:14.359 ********** 2026-03-08 00:56:09.004744 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.004750 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.004755 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.004763 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.004773 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.004779 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.004785 | orchestrator | 2026-03-08 00:56:09.004791 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:56:09.004798 | orchestrator | Sunday 08 March 2026 00:46:01 +0000 (0:00:00.728) 0:01:15.088 ********** 2026-03-08 00:56:09.004804 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.004811 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.004817 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.004871 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.004880 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.004886 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.004892 | orchestrator | 2026-03-08 00:56:09.004897 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:56:09.004903 | orchestrator | Sunday 08 March 2026 00:46:02 +0000 (0:00:01.093) 0:01:16.182 ********** 2026-03-08 00:56:09.004908 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.004914 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.004919 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.004925 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.004930 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.004935 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.004940 | orchestrator | 2026-03-08 00:56:09.004946 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:56:09.004952 | orchestrator | Sunday 08 March 2026 00:46:04 +0000 (0:00:02.139) 0:01:18.321 ********** 2026-03-08 00:56:09.004957 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.004962 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.004969 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.004976 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.004981 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.004987 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.004993 | orchestrator | 2026-03-08 00:56:09.004999 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:56:09.005015 | orchestrator | Sunday 08 March 2026 00:46:06 +0000 (0:00:02.126) 0:01:20.448 ********** 2026-03-08 00:56:09.005023 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005028 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005035 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005041 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005047 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005053 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005060 | 
orchestrator | 2026-03-08 00:56:09.005067 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:56:09.005074 | orchestrator | Sunday 08 March 2026 00:46:07 +0000 (0:00:00.752) 0:01:21.201 ********** 2026-03-08 00:56:09.005080 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005087 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005093 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005099 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.005104 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.005110 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.005116 | orchestrator | 2026-03-08 00:56:09.005123 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:56:09.005128 | orchestrator | Sunday 08 March 2026 00:46:08 +0000 (0:00:01.205) 0:01:22.406 ********** 2026-03-08 00:56:09.005134 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.005139 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.005145 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.005151 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005157 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005163 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005168 | orchestrator | 2026-03-08 00:56:09.005174 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:56:09.005180 | orchestrator | Sunday 08 March 2026 00:46:09 +0000 (0:00:00.768) 0:01:23.175 ********** 2026-03-08 00:56:09.005186 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.005191 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.005197 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.005203 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005210 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
00:56:09.005215 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005221 | orchestrator | 2026-03-08 00:56:09.005234 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:56:09.005240 | orchestrator | Sunday 08 March 2026 00:46:10 +0000 (0:00:01.270) 0:01:24.445 ********** 2026-03-08 00:56:09.005246 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.005253 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.005258 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.005262 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005266 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005269 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005273 | orchestrator | 2026-03-08 00:56:09.005277 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:56:09.005281 | orchestrator | Sunday 08 March 2026 00:46:11 +0000 (0:00:01.171) 0:01:25.616 ********** 2026-03-08 00:56:09.005284 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005288 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005292 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005295 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005299 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005303 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005307 | orchestrator | 2026-03-08 00:56:09.005313 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:56:09.005319 | orchestrator | Sunday 08 March 2026 00:46:13 +0000 (0:00:01.450) 0:01:27.067 ********** 2026-03-08 00:56:09.005324 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005339 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005345 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005350 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005405 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005414 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005421 | orchestrator | 2026-03-08 00:56:09.005427 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:56:09.005433 | orchestrator | Sunday 08 March 2026 00:46:14 +0000 (0:00:00.931) 0:01:27.999 ********** 2026-03-08 00:56:09.005441 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005446 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005452 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005458 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.005465 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.005471 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.005477 | orchestrator | 2026-03-08 00:56:09.005484 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:56:09.005490 | orchestrator | Sunday 08 March 2026 00:46:15 +0000 (0:00:01.246) 0:01:29.245 ********** 2026-03-08 00:56:09.005496 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.005502 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.005508 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.005514 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.005519 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.005524 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.005528 | orchestrator | 2026-03-08 00:56:09.005532 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:56:09.005538 | orchestrator | Sunday 08 March 2026 00:46:16 +0000 (0:00:00.814) 0:01:30.060 ********** 2026-03-08 00:56:09.005544 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.005550 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.005556 | 
orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.005562 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.005569 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.005575 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.005581 | orchestrator | 2026-03-08 00:56:09.005588 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-08 00:56:09.005595 | orchestrator | Sunday 08 March 2026 00:46:17 +0000 (0:00:01.545) 0:01:31.606 ********** 2026-03-08 00:56:09.005602 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.005609 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.005616 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.005623 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.005630 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.005636 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.005642 | orchestrator | 2026-03-08 00:56:09.005648 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-08 00:56:09.005656 | orchestrator | Sunday 08 March 2026 00:46:19 +0000 (0:00:01.926) 0:01:33.533 ********** 2026-03-08 00:56:09.005662 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.005668 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.005674 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.005681 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.005688 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.005696 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.005702 | orchestrator | 2026-03-08 00:56:09.005709 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-08 00:56:09.005717 | orchestrator | Sunday 08 March 2026 00:46:22 +0000 (0:00:02.662) 0:01:36.196 ********** 2026-03-08 00:56:09.005727 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.005735 | orchestrator | 2026-03-08 00:56:09.005742 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-08 00:56:09.005757 | orchestrator | Sunday 08 March 2026 00:46:23 +0000 (0:00:01.263) 0:01:37.459 ********** 2026-03-08 00:56:09.005763 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005769 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005775 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005781 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005787 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005793 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005799 | orchestrator | 2026-03-08 00:56:09.005805 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-08 00:56:09.005811 | orchestrator | Sunday 08 March 2026 00:46:24 +0000 (0:00:00.604) 0:01:38.064 ********** 2026-03-08 00:56:09.005818 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.005849 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.005856 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.005861 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.005867 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.005879 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.005885 | orchestrator | 2026-03-08 00:56:09.005891 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-08 00:56:09.005896 | orchestrator | Sunday 08 March 2026 00:46:24 +0000 (0:00:00.850) 0:01:38.914 ********** 2026-03-08 00:56:09.005902 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 
00:56:09.005908 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:56:09.005914 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:56:09.005920 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:56:09.005927 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:56:09.005934 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:56:09.005940 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:56:09.005947 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:56:09.005953 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:56:09.005959 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:56:09.006007 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:56:09.006049 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:56:09.006058 | orchestrator | 2026-03-08 00:56:09.006065 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-08 00:56:09.006071 | orchestrator | Sunday 08 March 2026 00:46:26 +0000 (0:00:01.503) 0:01:40.418 ********** 2026-03-08 00:56:09.006078 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.006085 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.006091 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.006098 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.006106 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.006112 | 
orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.006119 | orchestrator | 2026-03-08 00:56:09.006127 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-08 00:56:09.006134 | orchestrator | Sunday 08 March 2026 00:46:28 +0000 (0:00:02.051) 0:01:42.470 ********** 2026-03-08 00:56:09.006140 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006146 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006152 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006159 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006165 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006171 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006190 | orchestrator | 2026-03-08 00:56:09.006196 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-08 00:56:09.006203 | orchestrator | Sunday 08 March 2026 00:46:29 +0000 (0:00:00.675) 0:01:43.145 ********** 2026-03-08 00:56:09.006209 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006215 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006221 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006228 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006234 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006240 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006247 | orchestrator | 2026-03-08 00:56:09.006254 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-08 00:56:09.006261 | orchestrator | Sunday 08 March 2026 00:46:30 +0000 (0:00:00.923) 0:01:44.069 ********** 2026-03-08 00:56:09.006267 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006275 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006283 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006289 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006297 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006303 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006310 | orchestrator | 2026-03-08 00:56:09.006316 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-08 00:56:09.006323 | orchestrator | Sunday 08 March 2026 00:46:30 +0000 (0:00:00.724) 0:01:44.794 ********** 2026-03-08 00:56:09.006330 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.006337 | orchestrator | 2026-03-08 00:56:09.006344 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-08 00:56:09.006351 | orchestrator | Sunday 08 March 2026 00:46:32 +0000 (0:00:01.568) 0:01:46.362 ********** 2026-03-08 00:56:09.006358 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.006365 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.006372 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.006379 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.006387 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.006395 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.006402 | orchestrator | 2026-03-08 00:56:09.006409 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-08 00:56:09.006416 | orchestrator | Sunday 08 March 2026 00:47:23 +0000 (0:00:51.275) 0:02:37.638 ********** 2026-03-08 00:56:09.006423 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:56:09.006430 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:56:09.006438 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:56:09.006445 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006453 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:56:09.006466 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:56:09.006474 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:56:09.006481 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:56:09.006488 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:56:09.006495 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:56:09.006502 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006509 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:56:09.006516 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:56:09.006523 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:56:09.006541 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006548 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006555 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:56:09.006562 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:56:09.006569 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:56:09.006577 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006623 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:56:09.006630 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:56:09.006637 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:56:09.006643 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006649 | orchestrator | 2026-03-08 00:56:09.006655 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-08 00:56:09.006661 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.812) 0:02:38.450 ********** 2026-03-08 00:56:09.006666 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006673 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006678 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006684 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006691 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006697 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006703 | orchestrator | 2026-03-08 00:56:09.006710 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-08 00:56:09.006716 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:01.025) 0:02:39.475 ********** 2026-03-08 00:56:09.006723 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006730 | orchestrator | 2026-03-08 00:56:09.006735 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-08 00:56:09.006742 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:00.167) 0:02:39.642 ********** 2026-03-08 00:56:09.006750 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006756 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006762 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006769 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006775 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006783 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:56:09.006789 | orchestrator | 2026-03-08 00:56:09.006795 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-08 00:56:09.006803 | orchestrator | Sunday 08 March 2026 00:47:26 +0000 (0:00:00.729) 0:02:40.372 ********** 2026-03-08 00:56:09.006809 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006815 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006821 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006846 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006853 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006860 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006866 | orchestrator | 2026-03-08 00:56:09.006873 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-08 00:56:09.006879 | orchestrator | Sunday 08 March 2026 00:47:27 +0000 (0:00:00.893) 0:02:41.265 ********** 2026-03-08 00:56:09.006885 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.006893 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.006899 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.006907 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.006913 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.006919 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.006925 | orchestrator | 2026-03-08 00:56:09.006931 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-08 00:56:09.006937 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:00.688) 0:02:41.954 ********** 2026-03-08 00:56:09.006950 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.006956 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.006962 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.006968 | orchestrator | ok: [testbed-node-0] 2026-03-08 
00:56:09.006975 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.006981 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.006987 | orchestrator | 2026-03-08 00:56:09.006994 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-08 00:56:09.007000 | orchestrator | Sunday 08 March 2026 00:47:30 +0000 (0:00:02.364) 0:02:44.319 ********** 2026-03-08 00:56:09.007006 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.007013 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.007019 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.007026 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.007032 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.007038 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.007045 | orchestrator | 2026-03-08 00:56:09.007052 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-08 00:56:09.007058 | orchestrator | Sunday 08 March 2026 00:47:31 +0000 (0:00:00.617) 0:02:44.937 ********** 2026-03-08 00:56:09.007071 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.007079 | orchestrator | 2026-03-08 00:56:09.007085 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-08 00:56:09.007091 | orchestrator | Sunday 08 March 2026 00:47:32 +0000 (0:00:01.271) 0:02:46.208 ********** 2026-03-08 00:56:09.007098 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007104 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007110 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007116 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007122 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007128 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:56:09.007134 | orchestrator | 2026-03-08 00:56:09.007141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-08 00:56:09.007147 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:00.801) 0:02:47.010 ********** 2026-03-08 00:56:09.007153 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007158 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007164 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007171 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007178 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007184 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007190 | orchestrator | 2026-03-08 00:56:09.007196 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-08 00:56:09.007202 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:00.683) 0:02:47.694 ********** 2026-03-08 00:56:09.007208 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007213 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007252 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007260 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007266 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007272 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007277 | orchestrator | 2026-03-08 00:56:09.007284 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-08 00:56:09.007289 | orchestrator | Sunday 08 March 2026 00:47:34 +0000 (0:00:00.922) 0:02:48.616 ********** 2026-03-08 00:56:09.007295 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007302 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007308 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007314 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 00:56:09.007319 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007325 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007338 | orchestrator | 2026-03-08 00:56:09.007343 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-08 00:56:09.007349 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.735) 0:02:49.352 ********** 2026-03-08 00:56:09.007355 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007362 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007368 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007374 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007380 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007386 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007392 | orchestrator | 2026-03-08 00:56:09.007397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-08 00:56:09.007404 | orchestrator | Sunday 08 March 2026 00:47:36 +0000 (0:00:00.799) 0:02:50.151 ********** 2026-03-08 00:56:09.007410 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007416 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007422 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007428 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007434 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007439 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007445 | orchestrator | 2026-03-08 00:56:09.007451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-08 00:56:09.007457 | orchestrator | Sunday 08 March 2026 00:47:36 +0000 (0:00:00.742) 0:02:50.894 ********** 2026-03-08 00:56:09.007463 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007469 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:56:09.007475 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007481 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007487 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007493 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007499 | orchestrator | 2026-03-08 00:56:09.007505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-08 00:56:09.007511 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:01.085) 0:02:51.979 ********** 2026-03-08 00:56:09.007516 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.007523 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.007530 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.007535 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.007542 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.007549 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.007556 | orchestrator | 2026-03-08 00:56:09.007562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-08 00:56:09.007569 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.882) 0:02:52.861 ********** 2026-03-08 00:56:09.007576 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.007582 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.007589 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.007596 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.007603 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.007609 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.007615 | orchestrator | 2026-03-08 00:56:09.007622 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-08 00:56:09.007627 | orchestrator | Sunday 08 March 2026 00:47:40 +0000 (0:00:01.399) 0:02:54.261 ********** 2026-03-08 
00:56:09.007634 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.007641 | orchestrator | 2026-03-08 00:56:09.007647 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-08 00:56:09.007653 | orchestrator | Sunday 08 March 2026 00:47:41 +0000 (0:00:01.187) 0:02:55.449 ********** 2026-03-08 00:56:09.007660 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-08 00:56:09.007672 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-08 00:56:09.007684 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-08 00:56:09.007691 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-08 00:56:09.007697 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-08 00:56:09.007703 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-08 00:56:09.007709 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-08 00:56:09.007715 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-08 00:56:09.007721 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-08 00:56:09.007726 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-08 00:56:09.007732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-08 00:56:09.007738 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-08 00:56:09.007743 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-08 00:56:09.007749 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-08 00:56:09.007755 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-08 00:56:09.007761 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-03-08 00:56:09.007767 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-08 00:56:09.007773 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-08 00:56:09.007811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-08 00:56:09.007821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-08 00:56:09.007874 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-08 00:56:09.007881 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-08 00:56:09.007887 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-08 00:56:09.007894 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-08 00:56:09.007898 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-08 00:56:09.007902 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-08 00:56:09.007906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-08 00:56:09.007910 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-08 00:56:09.007915 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-08 00:56:09.007921 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-08 00:56:09.007927 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-08 00:56:09.007933 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-08 00:56:09.007938 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-08 00:56:09.007943 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-08 00:56:09.007949 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-08 00:56:09.007954 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-08 00:56:09.007959 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-08 00:56:09.007969 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-08 00:56:09.007978 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-08 00:56:09.007983 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-08 00:56:09.007989 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-08 00:56:09.007995 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-08 00:56:09.008001 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-08 00:56:09.008007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-08 00:56:09.008013 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-08 00:56:09.008020 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-08 00:56:09.008032 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-08 00:56:09.008038 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-08 00:56:09.008043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-08 00:56:09.008053 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-08 00:56:09.008061 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-08 00:56:09.008067 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-08 00:56:09.008073 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-08 00:56:09.008079 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-08 00:56:09.008085 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-08 00:56:09.008091 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-08 00:56:09.008097 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-08 00:56:09.008103 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-08 00:56:09.008108 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-08 00:56:09.008114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-08 00:56:09.008120 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-08 00:56:09.008125 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-08 00:56:09.008137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-08 00:56:09.008143 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-08 00:56:09.008149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-08 00:56:09.008154 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-08 00:56:09.008160 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-08 00:56:09.008166 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-08 00:56:09.008171 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-08 00:56:09.008177 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-08 00:56:09.008183 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-08 00:56:09.008188 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-08 00:56:09.008194 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-08 00:56:09.008201 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 
2026-03-08 00:56:09.008206 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-08 00:56:09.008210 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-08 00:56:09.008240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-08 00:56:09.008245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-08 00:56:09.008249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-08 00:56:09.008252 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-08 00:56:09.008256 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-08 00:56:09.008260 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-08 00:56:09.008263 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-08 00:56:09.008267 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-08 00:56:09.008271 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-08 00:56:09.008275 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-08 00:56:09.008283 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-08 00:56:09.008287 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-08 00:56:09.008290 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-08 00:56:09.008294 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-08 00:56:09.008298 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-08 00:56:09.008302 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-08 00:56:09.008309 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-08 00:56:09.008315 | 
orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-08 00:56:09.008325 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-08 00:56:09.008331 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-08 00:56:09.008338 | orchestrator | 2026-03-08 00:56:09.008345 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-08 00:56:09.008352 | orchestrator | Sunday 08 March 2026 00:47:47 +0000 (0:00:06.247) 0:03:01.696 ********** 2026-03-08 00:56:09.008359 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008366 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008375 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008380 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.008385 | orchestrator | 2026-03-08 00:56:09.008389 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-08 00:56:09.008392 | orchestrator | Sunday 08 March 2026 00:47:48 +0000 (0:00:00.814) 0:03:02.510 ********** 2026-03-08 00:56:09.008396 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-08 00:56:09.008401 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-08 00:56:09.008405 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-08 00:56:09.008408 | orchestrator | 2026-03-08 00:56:09.008412 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-08 00:56:09.008416 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:00.835) 
0:03:03.346 ********** 2026-03-08 00:56:09.008420 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-08 00:56:09.008423 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-08 00:56:09.008427 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-08 00:56:09.008431 | orchestrator | 2026-03-08 00:56:09.008434 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-08 00:56:09.008443 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:01.193) 0:03:04.539 ********** 2026-03-08 00:56:09.008446 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.008450 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.008454 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.008458 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008461 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008465 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008469 | orchestrator | 2026-03-08 00:56:09.008472 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-08 00:56:09.008476 | orchestrator | Sunday 08 March 2026 00:47:51 +0000 (0:00:00.634) 0:03:05.173 ********** 2026-03-08 00:56:09.008480 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.008484 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.008494 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.008497 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008501 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008505 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008508 | orchestrator | 2026-03-08 
00:56:09.008512 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-08 00:56:09.008516 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:00.768) 0:03:05.942 ********** 2026-03-08 00:56:09.008519 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.008523 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.008527 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008530 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.008534 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008538 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008542 | orchestrator | 2026-03-08 00:56:09.008570 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-08 00:56:09.008578 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:01.157) 0:03:07.099 ********** 2026-03-08 00:56:09.008584 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.008589 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.008595 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.008600 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008605 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008611 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008617 | orchestrator | 2026-03-08 00:56:09.008623 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-08 00:56:09.008629 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:00.801) 0:03:07.901 ********** 2026-03-08 00:56:09.008634 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.008640 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.008646 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.008651 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008657 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 00:56:09.008662 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008668 | orchestrator | 2026-03-08 00:56:09.008674 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-08 00:56:09.008682 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.666) 0:03:08.568 ********** 2026-03-08 00:56:09.008688 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.008693 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.008698 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.008704 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008709 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008716 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008721 | orchestrator | 2026-03-08 00:56:09.008726 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-08 00:56:09.008732 | orchestrator | Sunday 08 March 2026 00:47:55 +0000 (0:00:00.764) 0:03:09.332 ********** 2026-03-08 00:56:09.008737 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.008743 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.008749 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008755 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.008761 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008767 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008772 | orchestrator | 2026-03-08 00:56:09.008778 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-08 00:56:09.008785 | orchestrator | Sunday 08 March 2026 00:47:56 +0000 (0:00:00.628) 0:03:09.961 ********** 2026-03-08 00:56:09.008791 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.008797 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:56:09.008802 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.008813 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008844 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008850 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008856 | orchestrator | 2026-03-08 00:56:09.008861 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-08 00:56:09.008867 | orchestrator | Sunday 08 March 2026 00:47:56 +0000 (0:00:00.931) 0:03:10.893 ********** 2026-03-08 00:56:09.008873 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008878 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008884 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008890 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.008897 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.008901 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.008905 | orchestrator | 2026-03-08 00:56:09.008909 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-08 00:56:09.008915 | orchestrator | Sunday 08 March 2026 00:48:00 +0000 (0:00:03.357) 0:03:14.250 ********** 2026-03-08 00:56:09.008923 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.008931 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.008938 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.008944 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.008950 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.008956 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.008962 | orchestrator | 2026-03-08 00:56:09.008967 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-08 00:56:09.008971 | orchestrator | Sunday 08 March 2026 00:48:01 +0000 (0:00:01.011) 0:03:15.262 ********** 
2026-03-08 00:56:09.008976 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.008982 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.008988 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.008994 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009006 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009012 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009018 | orchestrator |
2026-03-08 00:56:09.009024 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-08 00:56:09.009030 | orchestrator | Sunday 08 March 2026 00:48:02 +0000 (0:00:01.320) 0:03:16.583 **********
2026-03-08 00:56:09.009036 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009042 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009048 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009054 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009059 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009064 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009070 | orchestrator |
2026-03-08 00:56:09.009075 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-08 00:56:09.009080 | orchestrator | Sunday 08 March 2026 00:48:03 +0000 (0:00:00.910) 0:03:17.494 **********
2026-03-08 00:56:09.009086 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.009094 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.009100 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.009107 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009142 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009149 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009154 | orchestrator |
2026-03-08 00:56:09.009160 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-08 00:56:09.009165 | orchestrator | Sunday 08 March 2026 00:48:04 +0000 (0:00:01.038) 0:03:18.532 **********
2026-03-08 00:56:09.009174 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-08 00:56:09.009193 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-08 00:56:09.009200 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009207 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-08 00:56:09.009213 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-08 00:56:09.009220 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009226 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-08 00:56:09.009233 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-08 00:56:09.009239 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009245 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009251 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009257 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009263 | orchestrator |
2026-03-08 00:56:09.009269 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-08 00:56:09.009275 | orchestrator | Sunday 08 March 2026 00:48:05 +0000 (0:00:01.286) 0:03:19.819 **********
2026-03-08 00:56:09.009281 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009287 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009293 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009298 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009304 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009310 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009315 | orchestrator |
2026-03-08 00:56:09.009321 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-08 00:56:09.009327 | orchestrator | Sunday 08 March 2026 00:48:06 +0000 (0:00:00.799) 0:03:20.618 **********
2026-03-08 00:56:09.009332 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009338 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009344 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009360 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009364 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009368 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009372 | orchestrator |
2026-03-08 00:56:09.009375 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-08 00:56:09.009379 | orchestrator | Sunday 08 March 2026 00:48:07 +0000 (0:00:01.090) 0:03:21.709 **********
2026-03-08 00:56:09.009383 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009386 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009390 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009397 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009401 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009404 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009408 | orchestrator |
2026-03-08 00:56:09.009412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-08 00:56:09.009416 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:00.792) 0:03:22.501 **********
2026-03-08 00:56:09.009419 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009423 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009427 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009430 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009434 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009438 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009441 | orchestrator |
2026-03-08 00:56:09.009445 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-08 00:56:09.009466 | orchestrator | Sunday 08 March 2026 00:48:09 +0000 (0:00:00.924) 0:03:23.426 **********
2026-03-08 00:56:09.009471 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009474 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009478 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009482 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009486 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009489 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009493 | orchestrator |
2026-03-08 00:56:09.009497 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-08 00:56:09.009500 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:00.844) 0:03:24.271 **********
2026-03-08 00:56:09.009504 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.009508 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.009512 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009515 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.009519 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009523 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009527 | orchestrator |
2026-03-08 00:56:09.009530 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-08 00:56:09.009534 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:01.231) 0:03:25.503 **********
2026-03-08 00:56:09.009538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.009542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.009546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.009549 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009553 | orchestrator |
2026-03-08 00:56:09.009557 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-08 00:56:09.009560 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:00.503) 0:03:26.006 **********
2026-03-08 00:56:09.009564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.009568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.009572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.009575 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009579 | orchestrator |
2026-03-08 00:56:09.009583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-08 00:56:09.009587 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:00.429) 0:03:26.435 **********
2026-03-08 00:56:09.009590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.009594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.009598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.009601 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009605 | orchestrator |
2026-03-08 00:56:09.009609 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-08 00:56:09.009616 | orchestrator | Sunday 08 March 2026 00:48:13 +0000 (0:00:00.603) 0:03:27.039 **********
2026-03-08 00:56:09.009620 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.009623 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.009627 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.009631 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009635 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009638 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009642 | orchestrator |
2026-03-08 00:56:09.009646 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-08 00:56:09.009650 | orchestrator | Sunday 08 March 2026 00:48:14 +0000 (0:00:00.915) 0:03:27.954 **********
2026-03-08 00:56:09.009653 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-08 00:56:09.009657 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-08 00:56:09.009661 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-08 00:56:09.009665 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009668 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-08 00:56:09.009672 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009676 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-08 00:56:09.009680 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-08 00:56:09.009683 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009687 | orchestrator |
2026-03-08 00:56:09.009691 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-08 00:56:09.009695 | orchestrator | Sunday 08 March 2026 00:48:16 +0000 (0:00:02.606) 0:03:30.561 **********
2026-03-08 00:56:09.009698 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.009702 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.009708 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.009712 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.009716 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.009720 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.009723 | orchestrator |
2026-03-08 00:56:09.009727 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:56:09.009731 | orchestrator | Sunday 08 March 2026 00:48:20 +0000 (0:00:03.809) 0:03:34.370 **********
2026-03-08 00:56:09.009735 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.009738 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.009742 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.009746 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.009749 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.009753 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.009757 | orchestrator |
2026-03-08 00:56:09.009760 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-08 00:56:09.009764 | orchestrator | Sunday 08 March 2026 00:48:21 +0000 (0:00:01.136) 0:03:35.506 **********
2026-03-08 00:56:09.009768 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.009772 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.009775 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.009780 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.009784 | orchestrator |
2026-03-08 00:56:09.009791 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-08 00:56:09.009818 | orchestrator | Sunday 08 March 2026 00:48:22 +0000 (0:00:01.212) 0:03:36.719 **********
2026-03-08 00:56:09.009841 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.009848 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.009855 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.009861 | orchestrator |
2026-03-08 00:56:09.009868 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-08 00:56:09.009875 | orchestrator | Sunday 08 March 2026 00:48:23 +0000 (0:00:00.382) 0:03:37.101 **********
2026-03-08 00:56:09.009879 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.009883 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.009892 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.009895 | orchestrator |
2026-03-08 00:56:09.009899 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-08 00:56:09.009903 | orchestrator | Sunday 08 March 2026 00:48:24 +0000 (0:00:01.182) 0:03:38.284 **********
2026-03-08 00:56:09.009906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:56:09.009912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-08 00:56:09.009918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-08 00:56:09.009927 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009934 | orchestrator |
2026-03-08 00:56:09.009939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-08 00:56:09.009945 | orchestrator | Sunday 08 March 2026 00:48:25 +0000 (0:00:01.228) 0:03:39.513 **********
2026-03-08 00:56:09.009951 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.009957 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.009962 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.009966 | orchestrator |
2026-03-08 00:56:09.009970 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-08 00:56:09.009974 | orchestrator | Sunday 08 March 2026 00:48:25 +0000 (0:00:00.381) 0:03:39.894 **********
2026-03-08 00:56:09.009977 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.009981 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.009985 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.009989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.009992 | orchestrator |
2026-03-08 00:56:09.009996 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-08 00:56:09.010000 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:01.048) 0:03:40.943 **********
2026-03-08 00:56:09.010004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.010007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.010041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.010046 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010050 | orchestrator |
2026-03-08 00:56:09.010054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-08 00:56:09.010058 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:00.400) 0:03:41.344 **********
2026-03-08 00:56:09.010062 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010065 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.010069 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.010073 | orchestrator |
2026-03-08 00:56:09.010077 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-08 00:56:09.010080 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:00.358) 0:03:41.702 **********
2026-03-08 00:56:09.010084 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010088 | orchestrator |
2026-03-08 00:56:09.010092 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-08 00:56:09.010095 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:00.256) 0:03:41.959 **********
2026-03-08 00:56:09.010099 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010103 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.010108 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.010114 | orchestrator |
2026-03-08 00:56:09.010120 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-08 00:56:09.010126 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:00.359) 0:03:42.318 **********
2026-03-08 00:56:09.010133 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010141 | orchestrator |
2026-03-08 00:56:09.010147 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-08 00:56:09.010153 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:00.278) 0:03:42.597 **********
2026-03-08 00:56:09.010165 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010170 | orchestrator |
2026-03-08 00:56:09.010180 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-08 00:56:09.010186 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:00.230) 0:03:42.828 **********
2026-03-08 00:56:09.010192 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010198 | orchestrator |
2026-03-08 00:56:09.010203 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-08 00:56:09.010209 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:00.144) 0:03:42.972 **********
2026-03-08 00:56:09.010216 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010222 | orchestrator |
2026-03-08 00:56:09.010228 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-08 00:56:09.010234 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:00.748) 0:03:43.721 **********
2026-03-08 00:56:09.010240 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010246 | orchestrator |
2026-03-08 00:56:09.010251 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-08 00:56:09.010257 | orchestrator | Sunday 08 March 2026 00:48:30 +0000 (0:00:00.215) 0:03:43.937 **********
2026-03-08 00:56:09.010263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.010269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.010275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.010281 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010287 | orchestrator |
2026-03-08 00:56:09.010290 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-08 00:56:09.010318 | orchestrator | Sunday 08 March 2026 00:48:30 +0000 (0:00:00.416) 0:03:44.354 **********
2026-03-08 00:56:09.010323 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010327 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.010330 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.010334 | orchestrator |
2026-03-08 00:56:09.010338 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-08 00:56:09.010344 | orchestrator | Sunday 08 March 2026 00:48:30 +0000 (0:00:00.432) 0:03:44.787 **********
2026-03-08 00:56:09.010351 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010357 | orchestrator |
2026-03-08 00:56:09.010364 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-08 00:56:09.010371 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:00.225) 0:03:45.012 **********
2026-03-08 00:56:09.010377 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010384 | orchestrator |
2026-03-08 00:56:09.010391 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-08 00:56:09.010398 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:00.221) 0:03:45.233 **********
2026-03-08 00:56:09.010405 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.010413 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.010422 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.010430 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.010436 | orchestrator |
2026-03-08 00:56:09.010443 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-08 00:56:09.010450 | orchestrator | Sunday 08 March 2026 00:48:32 +0000 (0:00:01.126) 0:03:46.360 **********
2026-03-08 00:56:09.010457 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.010463 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.010470 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.010476 | orchestrator |
2026-03-08 00:56:09.010482 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-08 00:56:09.010489 | orchestrator | Sunday 08 March 2026 00:48:32 +0000 (0:00:00.341) 0:03:46.702 **********
2026-03-08 00:56:09.010496 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.010503 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.010519 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.010523 | orchestrator |
2026-03-08 00:56:09.010527 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-08 00:56:09.010530 | orchestrator | Sunday 08 March 2026 00:48:34 +0000 (0:00:01.495) 0:03:48.197 **********
2026-03-08 00:56:09.010534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.010538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.010542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.010545 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010549 | orchestrator |
2026-03-08 00:56:09.010556 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-08 00:56:09.010562 | orchestrator | Sunday 08 March 2026 00:48:35 +0000 (0:00:00.956) 0:03:49.154 **********
2026-03-08 00:56:09.010568 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.010574 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.010580 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.010586 | orchestrator |
2026-03-08 00:56:09.010592 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-08 00:56:09.010598 | orchestrator | Sunday 08 March 2026 00:48:35 +0000 (0:00:00.694) 0:03:49.848 **********
2026-03-08 00:56:09.010604 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.010610 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.010616 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.010622 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.010628 | orchestrator |
2026-03-08 00:56:09.010634 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-08 00:56:09.010640 | orchestrator | Sunday 08 March 2026 00:48:36 +0000 (0:00:01.064) 0:03:50.913 **********
2026-03-08 00:56:09.010646 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.010652 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.010658 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.010664 | orchestrator |
2026-03-08 00:56:09.010670 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-08 00:56:09.010675 | orchestrator | Sunday 08 March 2026 00:48:37 +0000 (0:00:00.648) 0:03:51.562 **********
2026-03-08 00:56:09.010681 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.010688 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.010699 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.010707 | orchestrator |
2026-03-08 00:56:09.010713 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-08 00:56:09.010719 | orchestrator | Sunday 08 March 2026 00:48:38 +0000 (0:00:01.353) 0:03:52.915 **********
2026-03-08 00:56:09.010725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.010733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.010736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.010740 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010744 | orchestrator |
2026-03-08 00:56:09.010748 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-08 00:56:09.010751 | orchestrator | Sunday 08 March 2026 00:48:39 +0000 (0:00:00.766) 0:03:53.681 **********
2026-03-08 00:56:09.010755 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.010759 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.010764 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.010770 | orchestrator |
2026-03-08 00:56:09.010777 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-08 00:56:09.010782 | orchestrator | Sunday 08 March 2026 00:48:40 +0000 (0:00:00.402) 0:03:54.084 **********
2026-03-08 00:56:09.010788 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010795 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.010801 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.010813 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.010820 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.010897 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.010906 | orchestrator |
2026-03-08 00:56:09.010912 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-08 00:56:09.010919 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:01.289) 0:03:55.373 **********
2026-03-08 00:56:09.010925 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.010931 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.010937 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.010944 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.010950 | orchestrator |
2026-03-08 00:56:09.010957 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-08 00:56:09.010964 | orchestrator | Sunday 08 March 2026 00:48:42 +0000 (0:00:00.859) 0:03:56.232 **********
2026-03-08 00:56:09.010970 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.010976 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.010982 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.010989 | orchestrator |
2026-03-08 00:56:09.010995 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-08 00:56:09.011001 | orchestrator | Sunday 08 March 2026 00:48:42 +0000 (0:00:00.653) 0:03:56.886 **********
2026-03-08 00:56:09.011008 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.011014 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.011020 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.011026 | orchestrator |
2026-03-08 00:56:09.011033 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-08 00:56:09.011039 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:01.342) 0:03:58.228 **********
2026-03-08 00:56:09.011045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:56:09.011052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-08 00:56:09.011058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-08 00:56:09.011064 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.011070 | orchestrator |
2026-03-08 00:56:09.011076 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-08 00:56:09.011083 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:00.623) 0:03:58.852 **********
2026-03-08 00:56:09.011089 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.011096 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.011102 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.011108 | orchestrator |
2026-03-08 00:56:09.011114 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-08 00:56:09.011120 | orchestrator |
2026-03-08 00:56:09.011127 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:56:09.011133 | orchestrator | Sunday 08 March 2026 00:48:45 +0000 (0:00:00.651) 0:03:59.503 **********
2026-03-08 00:56:09.011140 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.011147 | orchestrator |
2026-03-08 00:56:09.011153 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:56:09.011159 | orchestrator | Sunday 08 March 2026 00:48:46 +0000 (0:00:00.816) 0:04:00.319 **********
2026-03-08 00:56:09.011165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.011172 | orchestrator |
2026-03-08 00:56:09.011178 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:56:09.011185 | orchestrator | Sunday 08 March 2026 00:48:46 +0000 (0:00:00.602) 0:04:00.922 **********
2026-03-08 00:56:09.011191 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.011197 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.011204 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.011216 | orchestrator |
2026-03-08 00:56:09.011222 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:56:09.011228 | orchestrator | Sunday 08 March 2026 00:48:48 +0000 (0:00:01.314) 0:04:02.236 **********
2026-03-08 00:56:09.011234 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.011241 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.011247 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.011253 | orchestrator |
2026-03-08 00:56:09.011260 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:56:09.011264 | orchestrator | Sunday 08 March 2026 00:48:48 +0000 (0:00:00.389) 0:04:02.625 **********
2026-03-08 00:56:09.011267 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.011275 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.011279 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.011283 | orchestrator |
2026-03-08 00:56:09.011287 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:56:09.011291 | orchestrator | Sunday 08 March 2026 00:48:49 +0000 (0:00:00.372) 0:04:02.998 **********
2026-03-08 00:56:09.011294 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.011298 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.011302 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.011306 | orchestrator |
2026-03-08 00:56:09.011309 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:56:09.011313 | orchestrator | Sunday 08 March 2026 00:48:49 +0000 (0:00:00.335) 0:04:03.333 **********
2026-03-08 00:56:09.011317 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.011321 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.011324 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.011328 | orchestrator |
2026-03-08 00:56:09.011332 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:56:09.011336 | orchestrator | Sunday 08 March 2026 00:48:50 +0000 (0:00:01.132) 0:04:04.466 **********
2026-03-08 00:56:09.011339 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.011343 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.011347 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.011350 | orchestrator |
2026-03-08 00:56:09.011354 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:56:09.011358 | orchestrator | Sunday 08 March 2026 00:48:50 +0000 (0:00:00.393) 0:04:04.859 **********
2026-03-08 00:56:09.011383 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.011387 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.011391 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.011395 | orchestrator |
2026-03-08 00:56:09.011399 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:56:09.011403 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:00.323) 0:04:05.182 **********
2026-03-08 00:56:09.011406 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.011410 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.011414 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.011418 | orchestrator |
2026-03-08 00:56:09.011422 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:56:09.011428 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:00.807) 0:04:05.990 **********
2026-03-08 00:56:09.011435 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.011442 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.011448 | orchestrator | ok: [testbed-node-2]
2026-03-08
00:56:09.011455 | orchestrator | 2026-03-08 00:56:09.011462 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:56:09.011469 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:00.739) 0:04:06.729 ********** 2026-03-08 00:56:09.011473 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011477 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.011481 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.011484 | orchestrator | 2026-03-08 00:56:09.011488 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:56:09.011496 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:00.630) 0:04:07.359 ********** 2026-03-08 00:56:09.011500 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.011506 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.011512 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.011517 | orchestrator | 2026-03-08 00:56:09.011525 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:56:09.011533 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:00.410) 0:04:07.770 ********** 2026-03-08 00:56:09.011540 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011546 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.011552 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.011557 | orchestrator | 2026-03-08 00:56:09.011563 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:56:09.011568 | orchestrator | Sunday 08 March 2026 00:48:54 +0000 (0:00:00.380) 0:04:08.151 ********** 2026-03-08 00:56:09.011574 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011579 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.011584 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.011589 | 
orchestrator | 2026-03-08 00:56:09.011594 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:56:09.011600 | orchestrator | Sunday 08 March 2026 00:48:54 +0000 (0:00:00.364) 0:04:08.515 ********** 2026-03-08 00:56:09.011606 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011611 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.011616 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.011623 | orchestrator | 2026-03-08 00:56:09.011628 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:56:09.011634 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:00.634) 0:04:09.150 ********** 2026-03-08 00:56:09.011640 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011646 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.011650 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.011654 | orchestrator | 2026-03-08 00:56:09.011658 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:56:09.011661 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:00.410) 0:04:09.560 ********** 2026-03-08 00:56:09.011665 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011669 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.011673 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.011679 | orchestrator | 2026-03-08 00:56:09.011685 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:56:09.011691 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:00.349) 0:04:09.910 ********** 2026-03-08 00:56:09.011697 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.011702 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.011708 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.011713 | orchestrator | 
2026-03-08 00:56:09.011720 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:56:09.011725 | orchestrator | Sunday 08 March 2026 00:48:56 +0000 (0:00:00.395) 0:04:10.306 ********** 2026-03-08 00:56:09.011730 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.011735 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.011747 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.011752 | orchestrator | 2026-03-08 00:56:09.011758 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:56:09.011765 | orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:00.683) 0:04:10.990 ********** 2026-03-08 00:56:09.011771 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.011777 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.011783 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.011790 | orchestrator | 2026-03-08 00:56:09.011796 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-08 00:56:09.011801 | orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:00.657) 0:04:11.647 ********** 2026-03-08 00:56:09.011814 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.011817 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.011821 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.011851 | orchestrator | 2026-03-08 00:56:09.011857 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-08 00:56:09.011863 | orchestrator | Sunday 08 March 2026 00:48:58 +0000 (0:00:00.422) 0:04:12.070 ********** 2026-03-08 00:56:09.011868 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.011874 | orchestrator | 2026-03-08 00:56:09.011879 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-03-08 00:56:09.011885 | orchestrator | Sunday 08 March 2026 00:48:59 +0000 (0:00:01.005) 0:04:13.075 ********** 2026-03-08 00:56:09.011890 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.011896 | orchestrator | 2026-03-08 00:56:09.011934 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-08 00:56:09.011942 | orchestrator | Sunday 08 March 2026 00:48:59 +0000 (0:00:00.174) 0:04:13.249 ********** 2026-03-08 00:56:09.011948 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-08 00:56:09.011954 | orchestrator | 2026-03-08 00:56:09.011960 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-08 00:56:09.011966 | orchestrator | Sunday 08 March 2026 00:49:01 +0000 (0:00:01.679) 0:04:14.929 ********** 2026-03-08 00:56:09.011972 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.011978 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.011985 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.011991 | orchestrator | 2026-03-08 00:56:09.011998 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-08 00:56:09.012002 | orchestrator | Sunday 08 March 2026 00:49:01 +0000 (0:00:00.713) 0:04:15.642 ********** 2026-03-08 00:56:09.012006 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012010 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.012013 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.012017 | orchestrator | 2026-03-08 00:56:09.012021 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-08 00:56:09.012024 | orchestrator | Sunday 08 March 2026 00:49:02 +0000 (0:00:00.457) 0:04:16.100 ********** 2026-03-08 00:56:09.012028 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012032 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012036 | 
orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012039 | orchestrator | 2026-03-08 00:56:09.012045 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-08 00:56:09.012051 | orchestrator | Sunday 08 March 2026 00:49:03 +0000 (0:00:01.398) 0:04:17.499 ********** 2026-03-08 00:56:09.012058 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012066 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012073 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012078 | orchestrator | 2026-03-08 00:56:09.012084 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-08 00:56:09.012090 | orchestrator | Sunday 08 March 2026 00:49:04 +0000 (0:00:01.106) 0:04:18.605 ********** 2026-03-08 00:56:09.012096 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012101 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012107 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012113 | orchestrator | 2026-03-08 00:56:09.012120 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-08 00:56:09.012126 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:01.094) 0:04:19.699 ********** 2026-03-08 00:56:09.012132 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012138 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.012142 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.012145 | orchestrator | 2026-03-08 00:56:09.012149 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-08 00:56:09.012158 | orchestrator | Sunday 08 March 2026 00:49:07 +0000 (0:00:01.438) 0:04:21.137 ********** 2026-03-08 00:56:09.012162 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012166 | orchestrator | 2026-03-08 00:56:09.012170 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-03-08 00:56:09.012173 | orchestrator | Sunday 08 March 2026 00:49:08 +0000 (0:00:01.657) 0:04:22.795 ********** 2026-03-08 00:56:09.012177 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012181 | orchestrator | 2026-03-08 00:56:09.012184 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-08 00:56:09.012188 | orchestrator | Sunday 08 March 2026 00:49:10 +0000 (0:00:01.428) 0:04:24.224 ********** 2026-03-08 00:56:09.012192 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-08 00:56:09.012195 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:56:09.012199 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:56:09.012203 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:56:09.012207 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-08 00:56:09.012211 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:56:09.012215 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:56:09.012218 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-03-08 00:56:09.012222 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:56:09.012226 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-08 00:56:09.012234 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-08 00:56:09.012238 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-08 00:56:09.012242 | orchestrator | 2026-03-08 00:56:09.012245 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-08 00:56:09.012249 | orchestrator | Sunday 08 March 2026 00:49:14 +0000 (0:00:04.036) 0:04:28.260 ********** 2026-03-08 
00:56:09.012253 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012258 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012264 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012273 | orchestrator | 2026-03-08 00:56:09.012280 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-08 00:56:09.012286 | orchestrator | Sunday 08 March 2026 00:49:16 +0000 (0:00:02.462) 0:04:30.723 ********** 2026-03-08 00:56:09.012291 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012297 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.012303 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.012308 | orchestrator | 2026-03-08 00:56:09.012314 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-08 00:56:09.012320 | orchestrator | Sunday 08 March 2026 00:49:17 +0000 (0:00:00.386) 0:04:31.109 ********** 2026-03-08 00:56:09.012326 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012334 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.012337 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.012341 | orchestrator | 2026-03-08 00:56:09.012345 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-08 00:56:09.012349 | orchestrator | Sunday 08 March 2026 00:49:17 +0000 (0:00:00.313) 0:04:31.422 ********** 2026-03-08 00:56:09.012375 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012379 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012383 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012387 | orchestrator | 2026-03-08 00:56:09.012390 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-08 00:56:09.012394 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:01.935) 0:04:33.357 ********** 2026-03-08 00:56:09.012398 | orchestrator | changed: 
[testbed-node-0] 2026-03-08 00:56:09.012401 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012405 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012409 | orchestrator | 2026-03-08 00:56:09.012417 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-08 00:56:09.012421 | orchestrator | Sunday 08 March 2026 00:49:20 +0000 (0:00:01.522) 0:04:34.880 ********** 2026-03-08 00:56:09.012424 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.012428 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.012432 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.012435 | orchestrator | 2026-03-08 00:56:09.012439 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-08 00:56:09.012443 | orchestrator | Sunday 08 March 2026 00:49:21 +0000 (0:00:00.457) 0:04:35.337 ********** 2026-03-08 00:56:09.012447 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.012450 | orchestrator | 2026-03-08 00:56:09.012454 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-08 00:56:09.012458 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:01.003) 0:04:36.341 ********** 2026-03-08 00:56:09.012461 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.012465 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.012469 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.012473 | orchestrator | 2026-03-08 00:56:09.012476 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-08 00:56:09.012480 | orchestrator | Sunday 08 March 2026 00:49:23 +0000 (0:00:00.703) 0:04:37.044 ********** 2026-03-08 00:56:09.012484 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.012488 | orchestrator | 
skipping: [testbed-node-2] 2026-03-08 00:56:09.012491 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.012495 | orchestrator | 2026-03-08 00:56:09.012499 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-08 00:56:09.012502 | orchestrator | Sunday 08 March 2026 00:49:23 +0000 (0:00:00.544) 0:04:37.588 ********** 2026-03-08 00:56:09.012506 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.012511 | orchestrator | 2026-03-08 00:56:09.012515 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-08 00:56:09.012518 | orchestrator | Sunday 08 March 2026 00:49:24 +0000 (0:00:01.012) 0:04:38.601 ********** 2026-03-08 00:56:09.012522 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012526 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012529 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012533 | orchestrator | 2026-03-08 00:56:09.012537 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-08 00:56:09.012541 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:01.993) 0:04:40.594 ********** 2026-03-08 00:56:09.012544 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012548 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012552 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012555 | orchestrator | 2026-03-08 00:56:09.012559 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-08 00:56:09.012563 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:01.515) 0:04:42.110 ********** 2026-03-08 00:56:09.012567 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012570 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012574 | orchestrator | 
changed: [testbed-node-2] 2026-03-08 00:56:09.012578 | orchestrator | 2026-03-08 00:56:09.012582 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-08 00:56:09.012586 | orchestrator | Sunday 08 March 2026 00:49:29 +0000 (0:00:01.620) 0:04:43.731 ********** 2026-03-08 00:56:09.012589 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.012593 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.012597 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.012601 | orchestrator | 2026-03-08 00:56:09.012604 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-08 00:56:09.012609 | orchestrator | Sunday 08 March 2026 00:49:31 +0000 (0:00:02.189) 0:04:45.920 ********** 2026-03-08 00:56:09.012625 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.012631 | orchestrator | 2026-03-08 00:56:09.012637 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-08 00:56:09.012643 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:00.554) 0:04:46.475 ********** 2026-03-08 00:56:09.012649 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-08 00:56:09.012655 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012661 | orchestrator | 2026-03-08 00:56:09.012668 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-08 00:56:09.012672 | orchestrator | Sunday 08 March 2026 00:49:54 +0000 (0:00:21.830) 0:05:08.305 ********** 2026-03-08 00:56:09.012676 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.012679 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.012683 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012687 | orchestrator | 2026-03-08 00:56:09.012690 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-08 00:56:09.012694 | orchestrator | Sunday 08 March 2026 00:50:03 +0000 (0:00:09.086) 0:05:17.391 ********** 2026-03-08 00:56:09.012698 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.012701 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.012705 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.012709 | orchestrator | 2026-03-08 00:56:09.012712 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-08 00:56:09.012733 | orchestrator | Sunday 08 March 2026 00:50:04 +0000 (0:00:00.618) 0:05:18.010 ********** 2026-03-08 00:56:09.012741 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-08 00:56:09.012749 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-08 00:56:09.012757 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-08 00:56:09.012766 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-08 00:56:09.012775 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-08 00:56:09.012786 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9bb56d22cc516385f6b971fc4096d23c2b432a17'}])  2026-03-08 00:56:09.012800 | orchestrator | 2026-03-08 00:56:09.012807 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-08 00:56:09.012814 | orchestrator | Sunday 08 March 2026 00:50:18 +0000 (0:00:14.382) 0:05:32.393 ********** 2026-03-08 00:56:09.012820 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.012848 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.012854 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.012860 | orchestrator | 2026-03-08 00:56:09.012866 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-08 00:56:09.012872 | orchestrator | Sunday 08 March 2026 00:50:18 +0000 (0:00:00.344) 0:05:32.738 ********** 2026-03-08 00:56:09.012878 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.012882 | orchestrator | 2026-03-08 00:56:09.012886 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-08 00:56:09.012893 | orchestrator | Sunday 08 March 2026 00:50:19 +0000 (0:00:00.789) 0:05:33.527 ********** 2026-03-08 00:56:09.012897 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.012900 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.012904 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.012908 | orchestrator | 2026-03-08 00:56:09.012912 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-08 00:56:09.012917 | orchestrator | Sunday 08 March 2026 00:50:19 +0000 (0:00:00.328) 0:05:33.856 ********** 2026-03-08 00:56:09.012923 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.012929 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.012938 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.012945 | orchestrator | 2026-03-08 00:56:09.012952 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-08 
00:56:09.012957 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:00.412) 0:05:34.268 **********
2026-03-08 00:56:09.012963 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:56:09.012969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-08 00:56:09.012975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-08 00:56:09.012982 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.012988 | orchestrator |
2026-03-08 00:56:09.012993 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-08 00:56:09.012997 | orchestrator | Sunday 08 March 2026 00:50:21 +0000 (0:00:00.855) 0:05:35.124 **********
2026-03-08 00:56:09.013001 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013024 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013028 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013032 | orchestrator |
2026-03-08 00:56:09.013036 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-08 00:56:09.013039 | orchestrator |
2026-03-08 00:56:09.013043 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:56:09.013047 | orchestrator | Sunday 08 March 2026 00:50:22 +0000 (0:00:00.524) 0:05:35.930 **********
2026-03-08 00:56:09.013051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.013055 | orchestrator |
2026-03-08 00:56:09.013059 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:56:09.013063 | orchestrator | Sunday 08 March 2026 00:50:22 +0000 (0:00:00.772) 0:05:36.455 **********
2026-03-08 00:56:09.013067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.013070 | orchestrator |
2026-03-08 00:56:09.013074 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:56:09.013083 | orchestrator | Sunday 08 March 2026 00:50:23 +0000 (0:00:00.772) 0:05:37.227 **********
2026-03-08 00:56:09.013087 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013090 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013094 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013098 | orchestrator |
2026-03-08 00:56:09.013102 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:56:09.013105 | orchestrator | Sunday 08 March 2026 00:50:24 +0000 (0:00:00.812) 0:05:38.040 **********
2026-03-08 00:56:09.013109 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013113 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013117 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013120 | orchestrator |
2026-03-08 00:56:09.013124 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:56:09.013128 | orchestrator | Sunday 08 March 2026 00:50:24 +0000 (0:00:00.379) 0:05:38.419 **********
2026-03-08 00:56:09.013131 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013135 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013139 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013143 | orchestrator |
2026-03-08 00:56:09.013146 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:56:09.013150 | orchestrator | Sunday 08 March 2026 00:50:25 +0000 (0:00:00.552) 0:05:38.971 **********
2026-03-08 00:56:09.013154 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013157 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013161 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013165 | orchestrator |
2026-03-08 00:56:09.013169 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:56:09.013172 | orchestrator | Sunday 08 March 2026 00:50:25 +0000 (0:00:00.323) 0:05:39.295 **********
2026-03-08 00:56:09.013176 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013180 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013184 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013187 | orchestrator |
2026-03-08 00:56:09.013191 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:56:09.013195 | orchestrator | Sunday 08 March 2026 00:50:26 +0000 (0:00:00.325) 0:05:40.116 **********
2026-03-08 00:56:09.013198 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013202 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013206 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013209 | orchestrator |
2026-03-08 00:56:09.013213 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:56:09.013217 | orchestrator | Sunday 08 March 2026 00:50:26 +0000 (0:00:00.325) 0:05:40.441 **********
2026-03-08 00:56:09.013221 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013224 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013228 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013232 | orchestrator |
2026-03-08 00:56:09.013235 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:56:09.013239 | orchestrator | Sunday 08 March 2026 00:50:26 +0000 (0:00:00.332) 0:05:40.773 **********
2026-03-08 00:56:09.013243 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013247 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013250 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013254 | orchestrator |
2026-03-08 00:56:09.013258 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:56:09.013262 | orchestrator | Sunday 08 March 2026 00:50:27 +0000 (0:00:01.016) 0:05:41.790 **********
2026-03-08 00:56:09.013269 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013272 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013276 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013280 | orchestrator |
2026-03-08 00:56:09.013284 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-08 00:56:09.013287 | orchestrator | Sunday 08 March 2026 00:50:28 +0000 (0:00:00.766) 0:05:42.557 **********
2026-03-08 00:56:09.013295 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013298 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013302 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013306 | orchestrator |
2026-03-08 00:56:09.013310 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-08 00:56:09.013314 | orchestrator | Sunday 08 March 2026 00:50:28 +0000 (0:00:00.339) 0:05:42.896 **********
2026-03-08 00:56:09.013317 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013321 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013325 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013329 | orchestrator |
2026-03-08 00:56:09.013332 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-08 00:56:09.013336 | orchestrator | Sunday 08 March 2026 00:50:29 +0000 (0:00:00.336) 0:05:43.232 **********
2026-03-08 00:56:09.013340 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013344 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013347 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013351 | orchestrator |
2026-03-08 00:56:09.013355 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-08 00:56:09.013370 | orchestrator | Sunday 08 March 2026 00:50:29 +0000 (0:00:00.619) 0:05:43.851 **********
2026-03-08 00:56:09.013375 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013378 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013382 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013386 | orchestrator |
2026-03-08 00:56:09.013390 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-08 00:56:09.013393 | orchestrator | Sunday 08 March 2026 00:50:30 +0000 (0:00:00.353) 0:05:44.205 **********
2026-03-08 00:56:09.013397 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013401 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013404 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013408 | orchestrator |
2026-03-08 00:56:09.013412 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-08 00:56:09.013416 | orchestrator | Sunday 08 March 2026 00:50:30 +0000 (0:00:00.364) 0:05:44.569 **********
2026-03-08 00:56:09.013419 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013423 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013427 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013430 | orchestrator |
2026-03-08 00:56:09.013434 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-08 00:56:09.013438 | orchestrator | Sunday 08 March 2026 00:50:30 +0000 (0:00:00.334) 0:05:44.904 **********
2026-03-08 00:56:09.013442 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013445 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013449 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013453 | orchestrator |
2026-03-08 00:56:09.013456 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-08 00:56:09.013460 | orchestrator | Sunday 08 March 2026 00:50:31 +0000 (0:00:00.585) 0:05:45.490 **********
2026-03-08 00:56:09.013464 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013468 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013471 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013475 | orchestrator |
2026-03-08 00:56:09.013479 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-08 00:56:09.013482 | orchestrator | Sunday 08 March 2026 00:50:31 +0000 (0:00:00.335) 0:05:45.826 **********
2026-03-08 00:56:09.013486 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013490 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013493 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013497 | orchestrator |
2026-03-08 00:56:09.013501 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-08 00:56:09.013504 | orchestrator | Sunday 08 March 2026 00:50:32 +0000 (0:00:00.362) 0:05:46.188 **********
2026-03-08 00:56:09.013508 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013512 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013518 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013522 | orchestrator |
2026-03-08 00:56:09.013526 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-08 00:56:09.013530 | orchestrator | Sunday 08 March 2026 00:50:33 +0000 (0:00:00.824) 0:05:47.013 **********
2026-03-08 00:56:09.013534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:56:09.013537 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:56:09.013541 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:56:09.013545 | orchestrator |
2026-03-08 00:56:09.013548 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-08 00:56:09.013552 | orchestrator | Sunday 08 March 2026 00:50:33 +0000 (0:00:00.635) 0:05:47.649 **********
2026-03-08 00:56:09.013556 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.013559 | orchestrator |
2026-03-08 00:56:09.013563 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-08 00:56:09.013567 | orchestrator | Sunday 08 March 2026 00:50:34 +0000 (0:00:00.602) 0:05:48.251 **********
2026-03-08 00:56:09.013571 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.013574 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.013578 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.013582 | orchestrator |
2026-03-08 00:56:09.013585 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-08 00:56:09.013589 | orchestrator | Sunday 08 March 2026 00:50:35 +0000 (0:00:00.700) 0:05:48.952 **********
2026-03-08 00:56:09.013593 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013596 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013600 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013604 | orchestrator |
2026-03-08 00:56:09.013610 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-08 00:56:09.013614 | orchestrator | Sunday 08 March 2026 00:50:35 +0000 (0:00:00.610) 0:05:49.562 **********
2026-03-08 00:56:09.013618 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013622 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013626 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013629 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-08 00:56:09.013633 | orchestrator |
2026-03-08 00:56:09.013637 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-08 00:56:09.013640 | orchestrator | Sunday 08 March 2026 00:50:46 +0000 (0:00:10.534) 0:06:00.096 **********
2026-03-08 00:56:09.013644 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013648 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013652 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013655 | orchestrator |
2026-03-08 00:56:09.013659 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-08 00:56:09.013663 | orchestrator | Sunday 08 March 2026 00:50:46 +0000 (0:00:00.392) 0:06:00.488 **********
2026-03-08 00:56:09.013666 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013670 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 00:56:09.013674 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 00:56:09.013677 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.013681 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013698 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.013702 | orchestrator |
2026-03-08 00:56:09.013706 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:56:09.013710 | orchestrator | Sunday 08 March 2026 00:50:48 +0000 (0:00:02.067) 0:06:02.556 **********
2026-03-08 00:56:09.013713 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 00:56:09.013720 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 00:56:09.013724 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013727 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-08 00:56:09.013731 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-08 00:56:09.013735 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:56:09.013739 | orchestrator |
2026-03-08 00:56:09.013742 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-08 00:56:09.013746 | orchestrator | Sunday 08 March 2026 00:50:49 +0000 (0:00:01.276) 0:06:03.833 **********
2026-03-08 00:56:09.013750 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.013753 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.013757 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.013761 | orchestrator |
2026-03-08 00:56:09.013765 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-08 00:56:09.013768 | orchestrator | Sunday 08 March 2026 00:50:51 +0000 (0:00:01.124) 0:06:04.958 **********
2026-03-08 00:56:09.013772 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013776 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013780 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013783 | orchestrator |
2026-03-08 00:56:09.013789 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-08 00:56:09.013795 | orchestrator | Sunday 08 March 2026 00:50:51 +0000 (0:00:00.325) 0:06:05.283 **********
2026-03-08 00:56:09.013805 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013813 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013819 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013847 | orchestrator |
2026-03-08 00:56:09.013853 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-08 00:56:09.013857 | orchestrator | Sunday 08 March 2026 00:50:51 +0000 (0:00:00.314) 0:06:05.598 **********
2026-03-08 00:56:09.013861 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.013865 | orchestrator |
2026-03-08 00:56:09.013868 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-08 00:56:09.013872 | orchestrator | Sunday 08 March 2026 00:50:52 +0000 (0:00:00.771) 0:06:06.369 **********
2026-03-08 00:56:09.013876 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013880 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013883 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013887 | orchestrator |
2026-03-08 00:56:09.013891 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-08 00:56:09.013894 | orchestrator | Sunday 08 March 2026 00:50:52 +0000 (0:00:00.339) 0:06:06.709 **********
2026-03-08 00:56:09.013898 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.013902 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.013906 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:09.013910 | orchestrator |
2026-03-08 00:56:09.013917 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-08 00:56:09.013925 | orchestrator | Sunday 08 March 2026 00:50:53 +0000 (0:00:00.349) 0:06:07.058 **********
2026-03-08 00:56:09.013933 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.013939 | orchestrator |
2026-03-08 00:56:09.013945 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-08 00:56:09.013952 | orchestrator | Sunday 08 March 2026 00:50:53 +0000 (0:00:00.507) 0:06:07.566 **********
2026-03-08 00:56:09.013956 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.013960 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.013963 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.013967 | orchestrator |
2026-03-08 00:56:09.013971 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-08 00:56:09.013975 | orchestrator | Sunday 08 March 2026 00:50:55 +0000 (0:00:01.776) 0:06:09.342 **********
2026-03-08 00:56:09.013983 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.013987 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.013990 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.013994 | orchestrator |
2026-03-08 00:56:09.013998 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-08 00:56:09.014002 | orchestrator | Sunday 08 March 2026 00:50:56 +0000 (0:00:01.181) 0:06:10.524 **********
2026-03-08 00:56:09.014006 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.014049 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.014139 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.014164 | orchestrator |
2026-03-08 00:56:09.014168 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-08 00:56:09.014172 | orchestrator | Sunday 08 March 2026 00:50:58 +0000 (0:00:01.778) 0:06:12.302 **********
2026-03-08 00:56:09.014176 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.014180 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.014184 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.014188 | orchestrator |
2026-03-08 00:56:09.014192 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-08 00:56:09.014196 | orchestrator | Sunday 08 March 2026 00:51:00 +0000 (0:00:01.974) 0:06:14.277 **********
2026-03-08 00:56:09.014200 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.014203 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:09.014207 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-08 00:56:09.014211 | orchestrator |
2026-03-08 00:56:09.014215 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-08 00:56:09.014219 | orchestrator | Sunday 08 March 2026 00:51:01 +0000 (0:00:00.712) 0:06:14.989 **********
2026-03-08 00:56:09.014246 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-08 00:56:09.014250 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-08 00:56:09.014254 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-08 00:56:09.014258 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-08 00:56:09.014261 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-08 00:56:09.014265 | orchestrator |
2026-03-08 00:56:09.014269 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-08 00:56:09.014273 | orchestrator | Sunday 08 March 2026 00:51:25 +0000 (0:00:24.134) 0:06:39.123 **********
2026-03-08 00:56:09.014276 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-08 00:56:09.014280 | orchestrator |
2026-03-08 00:56:09.014284 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-08 00:56:09.014288 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:01.163) 0:06:40.286 **********
2026-03-08 00:56:09.014291 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.014295 | orchestrator |
2026-03-08 00:56:09.014299 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-08 00:56:09.014303 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:00.302) 0:06:40.589 **********
2026-03-08 00:56:09.014306 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.014310 | orchestrator |
2026-03-08 00:56:09.014314 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-08 00:56:09.014318 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:00.126) 0:06:40.715 **********
2026-03-08 00:56:09.014321 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-08 00:56:09.014325 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-08 00:56:09.014329 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-08 00:56:09.014337 | orchestrator |
2026-03-08 00:56:09.014341 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-08 00:56:09.014345 | orchestrator | Sunday 08 March 2026 00:51:33 +0000 (0:00:06.266) 0:06:46.982 **********
2026-03-08 00:56:09.014349 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-08 00:56:09.014352 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-08 00:56:09.014356 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-08 00:56:09.014360 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-08 00:56:09.014364 | orchestrator |
2026-03-08 00:56:09.014367 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:56:09.014371 | orchestrator | Sunday 08 March 2026 00:51:38 +0000 (0:00:05.435) 0:06:52.417 **********
2026-03-08 00:56:09.014375 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.014379 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.014382 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.014386 | orchestrator |
2026-03-08 00:56:09.014390 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-08 00:56:09.014393 | orchestrator | Sunday 08 March 2026 00:51:39 +0000 (0:00:00.649) 0:06:53.066 **********
2026-03-08 00:56:09.014397 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:09.014401 | orchestrator |
2026-03-08 00:56:09.014405 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-08 00:56:09.014408 | orchestrator | Sunday 08 March 2026 00:51:39 +0000 (0:00:00.474) 0:06:53.541 **********
2026-03-08 00:56:09.014412 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.014416 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.014420 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.014423 | orchestrator |
2026-03-08 00:56:09.014427 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-08 00:56:09.014431 | orchestrator | Sunday 08 March 2026 00:51:40 +0000 (0:00:00.453) 0:06:53.994 **********
2026-03-08 00:56:09.014435 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:09.014438 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:09.014445 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:09.014449 | orchestrator |
2026-03-08 00:56:09.014452 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-08 00:56:09.014456 | orchestrator | Sunday 08 March 2026 00:51:41 +0000 (0:00:01.083) 0:06:55.078 **********
2026-03-08 00:56:09.014460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:56:09.014464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-08 00:56:09.014468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-08 00:56:09.014471 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:09.014475 | orchestrator |
2026-03-08 00:56:09.014479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-08 00:56:09.014483 | orchestrator | Sunday 08 March 2026 00:51:41 +0000 (0:00:00.576) 0:06:55.654 **********
2026-03-08 00:56:09.014486 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:09.014490 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:09.014494 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:09.014497 | orchestrator |
2026-03-08 00:56:09.014501 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-08 00:56:09.014505 | orchestrator |
2026-03-08 00:56:09.014509 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:56:09.014512 | orchestrator | Sunday 08 March 2026 00:51:42 +0000 (0:00:00.718) 0:06:56.373 **********
2026-03-08 00:56:09.014516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.014520 | orchestrator |
2026-03-08 00:56:09.014538 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:56:09.014545 | orchestrator | Sunday 08 March 2026 00:51:42 +0000 (0:00:00.471) 0:06:56.845 **********
2026-03-08 00:56:09.014549 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.014553 | orchestrator |
2026-03-08 00:56:09.014557 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:56:09.014561 | orchestrator | Sunday 08 March 2026 00:51:43 +0000 (0:00:00.454) 0:06:57.299 **********
2026-03-08 00:56:09.014564 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.014568 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.014572 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.014576 | orchestrator |
2026-03-08 00:56:09.014579 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:56:09.014583 | orchestrator | Sunday 08 March 2026 00:51:43 +0000 (0:00:00.423) 0:06:57.723 **********
2026-03-08 00:56:09.014587 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.014591 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.014594 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.014598 | orchestrator |
2026-03-08 00:56:09.014602 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:56:09.014608 | orchestrator | Sunday 08 March 2026 00:51:44 +0000 (0:00:00.649) 0:06:58.372 **********
2026-03-08 00:56:09.014614 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.014622 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.014631 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.014638 | orchestrator |
2026-03-08 00:56:09.014645 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:56:09.014652 | orchestrator | Sunday 08 March 2026 00:51:45 +0000 (0:00:00.628) 0:06:59.001 **********
2026-03-08 00:56:09.014659 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.014665 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.014671 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.014677 | orchestrator |
2026-03-08 00:56:09.014684 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:56:09.014691 | orchestrator | Sunday 08 March 2026 00:51:45 +0000 (0:00:00.603) 0:06:59.605 **********
2026-03-08 00:56:09.014699 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.014706 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.014710 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.014713 | orchestrator |
2026-03-08 00:56:09.014717 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:56:09.014721 | orchestrator | Sunday 08 March 2026 00:51:46 +0000 (0:00:00.462) 0:07:00.068 **********
2026-03-08 00:56:09.014725 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.014729 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.014732 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.014736 | orchestrator |
2026-03-08 00:56:09.014740 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:56:09.014746 | orchestrator | Sunday 08 March 2026 00:51:46 +0000 (0:00:00.256) 0:07:00.325 **********
2026-03-08 00:56:09.014753 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.014759 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.014764 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.014770 | orchestrator |
2026-03-08 00:56:09.014777 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:56:09.014783 | orchestrator | Sunday 08 March 2026 00:51:46 +0000 (0:00:00.259) 0:07:00.584 **********
2026-03-08 00:56:09.014790 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.014797 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.014802 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.014805 | orchestrator |
2026-03-08 00:56:09.014809 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:56:09.014813 | orchestrator | Sunday 08 March 2026 00:51:47 +0000 (0:00:00.648) 0:07:01.233 **********
2026-03-08 00:56:09.014820 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.014876 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.014886 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.014892 | orchestrator |
2026-03-08 00:56:09.014898 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-08 00:56:09.014904 | orchestrator | Sunday 08 March 2026 00:51:48 +0000 (0:00:00.853) 0:07:02.087 **********
2026-03-08 00:56:09.014910 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.014915 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.014921 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.014926 | orchestrator |
2026-03-08 00:56:09.014937 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-08 00:56:09.014943 | orchestrator | Sunday 08 March 2026 00:51:48 +0000 (0:00:00.262) 0:07:02.349 **********
2026-03-08 00:56:09.014949 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.014955 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.014961 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.014967 | orchestrator |
2026-03-08 00:56:09.014973 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-08 00:56:09.014978 | orchestrator | Sunday 08 March 2026 00:51:48 +0000 (0:00:00.268) 0:07:02.618 **********
2026-03-08 00:56:09.014984 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.014989 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.014994 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.015000 | orchestrator |
2026-03-08 00:56:09.015006 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-08 00:56:09.015012 | orchestrator | Sunday 08 March 2026 00:51:48 +0000 (0:00:00.277) 0:07:02.896 **********
2026-03-08 00:56:09.015018 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.015024 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.015030 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.015036 | orchestrator |
2026-03-08 00:56:09.015042 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-08 00:56:09.015048 | orchestrator | Sunday 08 March 2026 00:51:49 +0000 (0:00:00.487) 0:07:03.383 **********
2026-03-08 00:56:09.015055 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.015061 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.015067 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.015073 | orchestrator |
2026-03-08 00:56:09.015080 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-08 00:56:09.015092 | orchestrator | Sunday 08 March 2026 00:51:49 +0000 (0:00:00.288) 0:07:03.672 **********
2026-03-08 00:56:09.015098 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.015104 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.015110 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.015115 | orchestrator |
2026-03-08 00:56:09.015121 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-08 00:56:09.015127 | orchestrator | Sunday 08 March 2026 00:51:50 +0000 (0:00:00.270) 0:07:03.942 **********
2026-03-08 00:56:09.015133 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.015138 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.015144 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.015151 | orchestrator |
2026-03-08 00:56:09.015157 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-08 00:56:09.015163 | orchestrator | Sunday 08 March 2026 00:51:50 +0000 (0:00:00.281) 0:07:04.224 **********
2026-03-08 00:56:09.015168 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.015175 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.015181 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.015187 | orchestrator |
2026-03-08 00:56:09.015194 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-08 00:56:09.015201 | orchestrator | Sunday 08 March 2026 00:51:50 +0000 (0:00:00.428) 0:07:04.652 **********
2026-03-08 00:56:09.015207 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.015214 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.015228 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.015235 | orchestrator |
2026-03-08 00:56:09.015242 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-08 00:56:09.015248 | orchestrator | Sunday 08 March 2026 00:51:51 +0000 (0:00:00.299) 0:07:04.951 **********
2026-03-08 00:56:09.015255 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.015261 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.015267 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.015273 | orchestrator |
2026-03-08 00:56:09.015279 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-08 00:56:09.015285 | orchestrator | Sunday 08 March 2026 00:51:51 +0000 (0:00:00.453) 0:07:05.405 **********
2026-03-08 00:56:09.015290 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.015296 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.015301 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.015307 | orchestrator |
2026-03-08 00:56:09.015312 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-08 00:56:09.015318 | orchestrator | Sunday 08 March 2026 00:51:51 +0000 (0:00:00.451) 0:07:05.857 **********
2026-03-08 00:56:09.015324 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-08 00:56:09.015330 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:56:09.015336 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:56:09.015342 | orchestrator |
2026-03-08 00:56:09.015349 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-08 00:56:09.015355 | orchestrator | Sunday 08 March 2026 00:51:52 +0000 (0:00:00.566) 0:07:06.423 **********
2026-03-08 00:56:09.015362 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.015368 | orchestrator |
2026-03-08 00:56:09.015374 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-08 00:56:09.015381 | orchestrator | Sunday 08 March 2026 00:51:52 +0000 (0:00:00.471) 0:07:06.895 **********
2026-03-08 00:56:09.015388 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.015395 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.015401 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.015408 | orchestrator |
2026-03-08 00:56:09.015414 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-08 00:56:09.015422 | orchestrator | Sunday 08 March 2026 00:51:53 +0000 (0:00:00.426) 0:07:07.321 **********
2026-03-08 00:56:09.015428 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.015435 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.015442 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.015448 | orchestrator |
2026-03-08 00:56:09.015455 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-08 00:56:09.015462 | orchestrator | Sunday 08 March 2026 00:51:53 +0000 (0:00:00.254) 0:07:07.575 **********
2026-03-08 00:56:09.015468 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.015475 |
orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.015487 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.015495 | orchestrator | 2026-03-08 00:56:09.015502 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-08 00:56:09.015509 | orchestrator | Sunday 08 March 2026 00:51:54 +0000 (0:00:00.550) 0:07:08.126 ********** 2026-03-08 00:56:09.015515 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.015522 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.015529 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.015536 | orchestrator | 2026-03-08 00:56:09.015543 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-08 00:56:09.015550 | orchestrator | Sunday 08 March 2026 00:51:54 +0000 (0:00:00.299) 0:07:08.425 ********** 2026-03-08 00:56:09.015557 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-08 00:56:09.015572 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-08 00:56:09.015579 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-08 00:56:09.015586 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-08 00:56:09.015591 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-08 00:56:09.015598 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-08 00:56:09.015615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-08 00:56:09.015621 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-08 00:56:09.015628 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'vm.swappiness', 'value': 10}) 2026-03-08 00:56:09.015634 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-08 00:56:09.015641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-08 00:56:09.015647 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-08 00:56:09.015654 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-08 00:56:09.015661 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-08 00:56:09.015668 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-08 00:56:09.015675 | orchestrator | 2026-03-08 00:56:09.015681 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-08 00:56:09.015687 | orchestrator | Sunday 08 March 2026 00:51:57 +0000 (0:00:02.943) 0:07:11.369 ********** 2026-03-08 00:56:09.015694 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.015700 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.015707 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.015713 | orchestrator | 2026-03-08 00:56:09.015719 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-08 00:56:09.015725 | orchestrator | Sunday 08 March 2026 00:51:57 +0000 (0:00:00.265) 0:07:11.634 ********** 2026-03-08 00:56:09.015732 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.015739 | orchestrator | 2026-03-08 00:56:09.015745 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-08 00:56:09.015751 | orchestrator | Sunday 08 March 2026 00:51:58 +0000 (0:00:00.503) 0:07:12.138 
********** 2026-03-08 00:56:09.015759 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-08 00:56:09.015766 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-08 00:56:09.015772 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-08 00:56:09.015778 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-08 00:56:09.015785 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-08 00:56:09.015791 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-08 00:56:09.015797 | orchestrator | 2026-03-08 00:56:09.015803 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-08 00:56:09.015809 | orchestrator | Sunday 08 March 2026 00:51:59 +0000 (0:00:01.063) 0:07:13.201 ********** 2026-03-08 00:56:09.015815 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:56:09.015821 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-08 00:56:09.015853 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:56:09.015859 | orchestrator | 2026-03-08 00:56:09.015865 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-08 00:56:09.015871 | orchestrator | Sunday 08 March 2026 00:52:01 +0000 (0:00:01.837) 0:07:15.038 ********** 2026-03-08 00:56:09.015884 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-08 00:56:09.015890 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-08 00:56:09.015895 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.015900 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-08 00:56:09.015906 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-08 00:56:09.015912 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.015918 | 
orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-08 00:56:09.015923 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-08 00:56:09.015929 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.015934 | orchestrator | 2026-03-08 00:56:09.015940 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-08 00:56:09.015946 | orchestrator | Sunday 08 March 2026 00:52:02 +0000 (0:00:01.020) 0:07:16.059 ********** 2026-03-08 00:56:09.015958 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.015964 | orchestrator | 2026-03-08 00:56:09.015970 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-08 00:56:09.015976 | orchestrator | Sunday 08 March 2026 00:52:03 +0000 (0:00:01.858) 0:07:17.918 ********** 2026-03-08 00:56:09.015982 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.015988 | orchestrator | 2026-03-08 00:56:09.015994 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-08 00:56:09.016000 | orchestrator | Sunday 08 March 2026 00:52:04 +0000 (0:00:00.462) 0:07:18.380 ********** 2026-03-08 00:56:09.016007 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d02f715b-f6fc-5dd9-afa3-4d404d1973db', 'data_vg': 'ceph-d02f715b-f6fc-5dd9-afa3-4d404d1973db'}) 2026-03-08 00:56:09.016015 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a9457a91-34ca-5e42-9332-0f1ee38194fb', 'data_vg': 'ceph-a9457a91-34ca-5e42-9332-0f1ee38194fb'}) 2026-03-08 00:56:09.016021 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9742d483-d5c0-528b-aa0f-657894200b45', 'data_vg': 'ceph-9742d483-d5c0-528b-aa0f-657894200b45'}) 2026-03-08 00:56:09.016037 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-06971c7f-d1d9-5519-989d-752a08544c4e', 'data_vg': 'ceph-06971c7f-d1d9-5519-989d-752a08544c4e'}) 2026-03-08 00:56:09.016045 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ccaad6c6-3747-58dc-9b51-af637ea3a93d', 'data_vg': 'ceph-ccaad6c6-3747-58dc-9b51-af637ea3a93d'}) 2026-03-08 00:56:09.016051 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e5322502-cf2a-5eb6-8fcb-1a734f718f57', 'data_vg': 'ceph-e5322502-cf2a-5eb6-8fcb-1a734f718f57'}) 2026-03-08 00:56:09.016057 | orchestrator | 2026-03-08 00:56:09.016063 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-08 00:56:09.016069 | orchestrator | Sunday 08 March 2026 00:52:45 +0000 (0:00:41.327) 0:07:59.708 ********** 2026-03-08 00:56:09.016075 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016081 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016088 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.016094 | orchestrator | 2026-03-08 00:56:09.016100 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-08 00:56:09.016106 | orchestrator | Sunday 08 March 2026 00:52:46 +0000 (0:00:00.331) 0:08:00.039 ********** 2026-03-08 00:56:09.016112 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.016119 | orchestrator | 2026-03-08 00:56:09.016125 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-08 00:56:09.016131 | orchestrator | Sunday 08 March 2026 00:52:46 +0000 (0:00:00.519) 0:08:00.559 ********** 2026-03-08 00:56:09.016137 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.016144 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.016158 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.016166 | orchestrator | 2026-03-08 00:56:09.016174 | orchestrator 
| TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-08 00:56:09.016180 | orchestrator | Sunday 08 March 2026 00:52:47 +0000 (0:00:00.940) 0:08:01.499 ********** 2026-03-08 00:56:09.016186 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.016192 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.016198 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.016204 | orchestrator | 2026-03-08 00:56:09.016210 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-08 00:56:09.016215 | orchestrator | Sunday 08 March 2026 00:52:50 +0000 (0:00:02.643) 0:08:04.143 ********** 2026-03-08 00:56:09.016221 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.016227 | orchestrator | 2026-03-08 00:56:09.016233 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-08 00:56:09.016240 | orchestrator | Sunday 08 March 2026 00:52:50 +0000 (0:00:00.535) 0:08:04.679 ********** 2026-03-08 00:56:09.016246 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.016253 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.016259 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.016266 | orchestrator | 2026-03-08 00:56:09.016272 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-08 00:56:09.016279 | orchestrator | Sunday 08 March 2026 00:52:52 +0000 (0:00:01.486) 0:08:06.165 ********** 2026-03-08 00:56:09.016285 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.016292 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.016298 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.016304 | orchestrator | 2026-03-08 00:56:09.016310 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-08 
00:56:09.016316 | orchestrator | Sunday 08 March 2026 00:52:53 +0000 (0:00:01.188) 0:08:07.354 ********** 2026-03-08 00:56:09.016322 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.016328 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.016334 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.016340 | orchestrator | 2026-03-08 00:56:09.016346 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-08 00:56:09.016352 | orchestrator | Sunday 08 March 2026 00:52:55 +0000 (0:00:01.837) 0:08:09.191 ********** 2026-03-08 00:56:09.016359 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016365 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016371 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.016377 | orchestrator | 2026-03-08 00:56:09.016384 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-08 00:56:09.016395 | orchestrator | Sunday 08 March 2026 00:52:55 +0000 (0:00:00.409) 0:08:09.601 ********** 2026-03-08 00:56:09.016401 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016408 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016414 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.016421 | orchestrator | 2026-03-08 00:56:09.016428 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-08 00:56:09.016435 | orchestrator | Sunday 08 March 2026 00:52:56 +0000 (0:00:00.650) 0:08:10.252 ********** 2026-03-08 00:56:09.016442 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-08 00:56:09.016448 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-08 00:56:09.016454 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-08 00:56:09.016459 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-08 00:56:09.016465 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-08 00:56:09.016471 | 
orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-08 00:56:09.016477 | orchestrator | 2026-03-08 00:56:09.016484 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-08 00:56:09.016490 | orchestrator | Sunday 08 March 2026 00:52:57 +0000 (0:00:01.001) 0:08:11.253 ********** 2026-03-08 00:56:09.016501 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-08 00:56:09.016508 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-08 00:56:09.016515 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-08 00:56:09.016521 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-08 00:56:09.016527 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-08 00:56:09.016534 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-08 00:56:09.016540 | orchestrator | 2026-03-08 00:56:09.016554 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-08 00:56:09.016560 | orchestrator | Sunday 08 March 2026 00:52:59 +0000 (0:00:02.123) 0:08:13.377 ********** 2026-03-08 00:56:09.016567 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-08 00:56:09.016573 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-08 00:56:09.016579 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-08 00:56:09.016585 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-08 00:56:09.016592 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-08 00:56:09.016598 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-08 00:56:09.016605 | orchestrator | 2026-03-08 00:56:09.016611 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-08 00:56:09.016618 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:03.728) 0:08:17.105 ********** 2026-03-08 00:56:09.016624 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016631 | 
orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.016645 | orchestrator | 2026-03-08 00:56:09.016651 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-08 00:56:09.016657 | orchestrator | Sunday 08 March 2026 00:53:06 +0000 (0:00:03.208) 0:08:20.313 ********** 2026-03-08 00:56:09.016664 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016670 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016677 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-08 00:56:09.016684 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.016690 | orchestrator | 2026-03-08 00:56:09.016697 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-08 00:56:09.016704 | orchestrator | Sunday 08 March 2026 00:53:18 +0000 (0:00:12.499) 0:08:32.813 ********** 2026-03-08 00:56:09.016710 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016716 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016723 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.016730 | orchestrator | 2026-03-08 00:56:09.016736 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-08 00:56:09.016742 | orchestrator | Sunday 08 March 2026 00:53:19 +0000 (0:00:01.029) 0:08:33.842 ********** 2026-03-08 00:56:09.016749 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016756 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016762 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.016769 | orchestrator | 2026-03-08 00:56:09.016775 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-08 
00:56:09.016781 | orchestrator | Sunday 08 March 2026 00:53:20 +0000 (0:00:00.388) 0:08:34.231 ********** 2026-03-08 00:56:09.016788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.016794 | orchestrator | 2026-03-08 00:56:09.016801 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-08 00:56:09.016807 | orchestrator | Sunday 08 March 2026 00:53:20 +0000 (0:00:00.561) 0:08:34.793 ********** 2026-03-08 00:56:09.016813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:56:09.016819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:56:09.016880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:56:09.016900 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016907 | orchestrator | 2026-03-08 00:56:09.016913 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-08 00:56:09.016919 | orchestrator | Sunday 08 March 2026 00:53:21 +0000 (0:00:00.662) 0:08:35.455 ********** 2026-03-08 00:56:09.016925 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016931 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.016938 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.016944 | orchestrator | 2026-03-08 00:56:09.016950 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-08 00:56:09.016957 | orchestrator | Sunday 08 March 2026 00:53:22 +0000 (0:00:00.572) 0:08:36.028 ********** 2026-03-08 00:56:09.016963 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.016969 | orchestrator | 2026-03-08 00:56:09.016976 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-08 00:56:09.016983 | orchestrator | Sunday 08 March 2026 
00:53:22 +0000 (0:00:00.257) 0:08:36.285 ********** 2026-03-08 00:56:09.016989 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017002 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017009 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017016 | orchestrator | 2026-03-08 00:56:09.017022 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-08 00:56:09.017028 | orchestrator | Sunday 08 March 2026 00:53:22 +0000 (0:00:00.327) 0:08:36.613 ********** 2026-03-08 00:56:09.017034 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017039 | orchestrator | 2026-03-08 00:56:09.017045 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-08 00:56:09.017051 | orchestrator | Sunday 08 March 2026 00:53:22 +0000 (0:00:00.215) 0:08:36.828 ********** 2026-03-08 00:56:09.017057 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017064 | orchestrator | 2026-03-08 00:56:09.017071 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-08 00:56:09.017077 | orchestrator | Sunday 08 March 2026 00:53:23 +0000 (0:00:00.245) 0:08:37.074 ********** 2026-03-08 00:56:09.017084 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017091 | orchestrator | 2026-03-08 00:56:09.017097 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-08 00:56:09.017103 | orchestrator | Sunday 08 March 2026 00:53:23 +0000 (0:00:00.126) 0:08:37.201 ********** 2026-03-08 00:56:09.017110 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017115 | orchestrator | 2026-03-08 00:56:09.017121 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-08 00:56:09.017127 | orchestrator | Sunday 08 March 2026 00:53:23 +0000 (0:00:00.194) 0:08:37.395 ********** 2026-03-08 
00:56:09.017142 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017149 | orchestrator | 2026-03-08 00:56:09.017156 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-08 00:56:09.017163 | orchestrator | Sunday 08 March 2026 00:53:23 +0000 (0:00:00.198) 0:08:37.593 ********** 2026-03-08 00:56:09.017169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:56:09.017176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:56:09.017183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:56:09.017189 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017196 | orchestrator | 2026-03-08 00:56:09.017203 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-08 00:56:09.017209 | orchestrator | Sunday 08 March 2026 00:53:24 +0000 (0:00:01.015) 0:08:38.608 ********** 2026-03-08 00:56:09.017215 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017222 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017229 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017236 | orchestrator | 2026-03-08 00:56:09.017243 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-08 00:56:09.017257 | orchestrator | Sunday 08 March 2026 00:53:25 +0000 (0:00:00.345) 0:08:38.954 ********** 2026-03-08 00:56:09.017264 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017271 | orchestrator | 2026-03-08 00:56:09.017277 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-08 00:56:09.017283 | orchestrator | Sunday 08 March 2026 00:53:25 +0000 (0:00:00.262) 0:08:39.216 ********** 2026-03-08 00:56:09.017290 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017296 | orchestrator | 2026-03-08 00:56:09.017303 | 
orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-08 00:56:09.017309 | orchestrator | 2026-03-08 00:56:09.017315 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 00:56:09.017321 | orchestrator | Sunday 08 March 2026 00:53:26 +0000 (0:00:00.749) 0:08:39.966 ********** 2026-03-08 00:56:09.017327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.017335 | orchestrator | 2026-03-08 00:56:09.017341 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-08 00:56:09.017347 | orchestrator | Sunday 08 March 2026 00:53:27 +0000 (0:00:01.394) 0:08:41.360 ********** 2026-03-08 00:56:09.017353 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.017360 | orchestrator | 2026-03-08 00:56:09.017367 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:56:09.017373 | orchestrator | Sunday 08 March 2026 00:53:28 +0000 (0:00:01.194) 0:08:42.554 ********** 2026-03-08 00:56:09.017379 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017386 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017392 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017399 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.017405 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.017411 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.017418 | orchestrator | 2026-03-08 00:56:09.017424 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:56:09.017431 | 
orchestrator | Sunday 08 March 2026 00:53:29 +0000 (0:00:01.220) 0:08:43.775 ********** 2026-03-08 00:56:09.017438 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.017444 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.017451 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.017457 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.017463 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.017468 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.017474 | orchestrator | 2026-03-08 00:56:09.017480 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:56:09.017485 | orchestrator | Sunday 08 March 2026 00:53:30 +0000 (0:00:00.748) 0:08:44.524 ********** 2026-03-08 00:56:09.017491 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.017498 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.017504 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.017510 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.017517 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.017524 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.017530 | orchestrator | 2026-03-08 00:56:09.017541 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:56:09.017548 | orchestrator | Sunday 08 March 2026 00:53:31 +0000 (0:00:00.991) 0:08:45.516 ********** 2026-03-08 00:56:09.017555 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.017561 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.017567 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.017573 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.017579 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.017585 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.017597 | orchestrator | 2026-03-08 00:56:09.017604 | orchestrator | TASK [ceph-handler : Check 
for a mgr container] ******************************** 2026-03-08 00:56:09.017610 | orchestrator | Sunday 08 March 2026 00:53:32 +0000 (0:00:00.706) 0:08:46.223 ********** 2026-03-08 00:56:09.017615 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017621 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017627 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017633 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.017639 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.017645 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.017651 | orchestrator | 2026-03-08 00:56:09.017657 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-08 00:56:09.017662 | orchestrator | Sunday 08 March 2026 00:53:33 +0000 (0:00:01.266) 0:08:47.489 ********** 2026-03-08 00:56:09.017669 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017675 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017681 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017687 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.017693 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.017706 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.017714 | orchestrator | 2026-03-08 00:56:09.017721 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:56:09.017727 | orchestrator | Sunday 08 March 2026 00:53:34 +0000 (0:00:00.582) 0:08:48.072 ********** 2026-03-08 00:56:09.017733 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017738 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017743 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017749 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.017756 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.017763 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:56:09.017769 | orchestrator | 2026-03-08 00:56:09.017776 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:56:09.017782 | orchestrator | Sunday 08 March 2026 00:53:35 +0000 (0:00:00.863) 0:08:48.935 ********** 2026-03-08 00:56:09.017788 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.017795 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.017802 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.017808 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.017814 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.017820 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.017851 | orchestrator | 2026-03-08 00:56:09.017858 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:56:09.017864 | orchestrator | Sunday 08 March 2026 00:53:36 +0000 (0:00:01.092) 0:08:50.027 ********** 2026-03-08 00:56:09.017870 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.017894 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.017900 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.017907 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.017913 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.017919 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.017926 | orchestrator | 2026-03-08 00:56:09.017932 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:56:09.017938 | orchestrator | Sunday 08 March 2026 00:53:37 +0000 (0:00:01.362) 0:08:51.389 ********** 2026-03-08 00:56:09.017944 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.017951 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.017957 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.017964 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.017971 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 00:56:09.017978 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.017984 | orchestrator | 2026-03-08 00:56:09.017991 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:56:09.017998 | orchestrator | Sunday 08 March 2026 00:53:38 +0000 (0:00:00.627) 0:08:52.016 ********** 2026-03-08 00:56:09.018004 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.018083 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.018094 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.018111 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.018117 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.018124 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.018130 | orchestrator | 2026-03-08 00:56:09.018137 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:56:09.018143 | orchestrator | Sunday 08 March 2026 00:53:38 +0000 (0:00:00.904) 0:08:52.921 ********** 2026-03-08 00:56:09.018149 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.018155 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.018161 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.018167 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.018174 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.018180 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.018187 | orchestrator | 2026-03-08 00:56:09.018193 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:56:09.018200 | orchestrator | Sunday 08 March 2026 00:53:39 +0000 (0:00:00.641) 0:08:53.562 ********** 2026-03-08 00:56:09.018206 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.018212 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.018218 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.018224 | orchestrator 
| skipping: [testbed-node-0] 2026-03-08 00:56:09.018231 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.018237 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.018244 | orchestrator | 2026-03-08 00:56:09.018251 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:56:09.018258 | orchestrator | Sunday 08 March 2026 00:53:40 +0000 (0:00:00.928) 0:08:54.491 ********** 2026-03-08 00:56:09.018264 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.018271 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.018277 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.018284 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.018291 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.018298 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.018304 | orchestrator | 2026-03-08 00:56:09.018316 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:56:09.018323 | orchestrator | Sunday 08 March 2026 00:53:41 +0000 (0:00:00.625) 0:08:55.117 ********** 2026-03-08 00:56:09.018328 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.018334 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.018340 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.018346 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.018352 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.018358 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.018364 | orchestrator | 2026-03-08 00:56:09.018370 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:56:09.018377 | orchestrator | Sunday 08 March 2026 00:53:42 +0000 (0:00:00.926) 0:08:56.043 ********** 2026-03-08 00:56:09.018383 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.018389 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:56:09.018396 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.018402 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:09.018408 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:09.018414 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:09.018432 | orchestrator | 2026-03-08 00:56:09.018439 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:56:09.018445 | orchestrator | Sunday 08 March 2026 00:53:42 +0000 (0:00:00.619) 0:08:56.662 ********** 2026-03-08 00:56:09.018451 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.018457 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.018464 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.018484 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.018490 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.018503 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.018509 | orchestrator | 2026-03-08 00:56:09.018516 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:56:09.018523 | orchestrator | Sunday 08 March 2026 00:53:43 +0000 (0:00:00.869) 0:08:57.531 ********** 2026-03-08 00:56:09.018529 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.018535 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.018542 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.018548 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.018555 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.018561 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.018567 | orchestrator | 2026-03-08 00:56:09.018574 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:56:09.018580 | orchestrator | Sunday 08 March 2026 00:53:44 +0000 (0:00:00.672) 0:08:58.203 ********** 2026-03-08 00:56:09.018586 | orchestrator 
| ok: [testbed-node-3] 2026-03-08 00:56:09.018593 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.018599 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.018604 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.018611 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.018617 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.018623 | orchestrator | 2026-03-08 00:56:09.018628 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-08 00:56:09.018634 | orchestrator | Sunday 08 March 2026 00:53:45 +0000 (0:00:01.252) 0:08:59.456 ********** 2026-03-08 00:56:09.018641 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.018648 | orchestrator | 2026-03-08 00:56:09.018655 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-08 00:56:09.018661 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:03.961) 0:09:03.417 ********** 2026-03-08 00:56:09.018667 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.018674 | orchestrator | 2026-03-08 00:56:09.018680 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-08 00:56:09.018687 | orchestrator | Sunday 08 March 2026 00:53:51 +0000 (0:00:01.978) 0:09:05.396 ********** 2026-03-08 00:56:09.018693 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.018699 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.018705 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.018712 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.018718 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.018725 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.018731 | orchestrator | 2026-03-08 00:56:09.018737 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 
2026-03-08 00:56:09.018744 | orchestrator | Sunday 08 March 2026 00:53:53 +0000 (0:00:01.848) 0:09:07.245 ********** 2026-03-08 00:56:09.018750 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.018756 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.018762 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.018768 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.018775 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.018781 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.018787 | orchestrator | 2026-03-08 00:56:09.018793 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-08 00:56:09.018799 | orchestrator | Sunday 08 March 2026 00:53:54 +0000 (0:00:00.980) 0:09:08.226 ********** 2026-03-08 00:56:09.018806 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.018814 | orchestrator | 2026-03-08 00:56:09.018821 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-08 00:56:09.018847 | orchestrator | Sunday 08 March 2026 00:53:55 +0000 (0:00:01.275) 0:09:09.501 ********** 2026-03-08 00:56:09.018853 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.018859 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.018872 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.018879 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.018885 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.018891 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.018897 | orchestrator | 2026-03-08 00:56:09.018904 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-08 00:56:09.018910 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:01.895) 
0:09:11.397 ********** 2026-03-08 00:56:09.018917 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.018923 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.018930 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.018937 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.018950 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.018957 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.018963 | orchestrator | 2026-03-08 00:56:09.018969 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-08 00:56:09.018976 | orchestrator | Sunday 08 March 2026 00:54:01 +0000 (0:00:04.201) 0:09:15.598 ********** 2026-03-08 00:56:09.018984 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:09.018991 | orchestrator | 2026-03-08 00:56:09.018997 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-08 00:56:09.019004 | orchestrator | Sunday 08 March 2026 00:54:02 +0000 (0:00:01.140) 0:09:16.739 ********** 2026-03-08 00:56:09.019011 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019017 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019023 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.019029 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.019035 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.019041 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019047 | orchestrator | 2026-03-08 00:56:09.019054 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-08 00:56:09.019060 | orchestrator | Sunday 08 March 2026 00:54:03 +0000 (0:00:01.006) 0:09:17.745 ********** 2026-03-08 00:56:09.019067 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:56:09.019073 | 
orchestrator | changed: [testbed-node-5] 2026-03-08 00:56:09.019079 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:56:09.019094 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:09.019100 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:09.019107 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:09.019113 | orchestrator | 2026-03-08 00:56:09.019120 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-08 00:56:09.019126 | orchestrator | Sunday 08 March 2026 00:54:06 +0000 (0:00:02.367) 0:09:20.113 ********** 2026-03-08 00:56:09.019133 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019139 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019160 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019169 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:09.019176 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:09.019184 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:09.019191 | orchestrator | 2026-03-08 00:56:09.019198 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-08 00:56:09.019205 | orchestrator | 2026-03-08 00:56:09.019211 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 00:56:09.019218 | orchestrator | Sunday 08 March 2026 00:54:07 +0000 (0:00:01.168) 0:09:21.282 ********** 2026-03-08 00:56:09.019225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-03-08 00:56:09.019231 | orchestrator | 2026-03-08 00:56:09.019237 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-08 00:56:09.019244 | orchestrator | Sunday 08 March 2026 00:54:07 +0000 (0:00:00.521) 0:09:21.803 ********** 2026-03-08 00:56:09.019258 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.019265 | orchestrator | 2026-03-08 00:56:09.019272 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:56:09.019280 | orchestrator | Sunday 08 March 2026 00:54:08 +0000 (0:00:00.978) 0:09:22.782 ********** 2026-03-08 00:56:09.019287 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019294 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019301 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019308 | orchestrator | 2026-03-08 00:56:09.019314 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:56:09.019321 | orchestrator | Sunday 08 March 2026 00:54:09 +0000 (0:00:00.429) 0:09:23.211 ********** 2026-03-08 00:56:09.019328 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019335 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019342 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019349 | orchestrator | 2026-03-08 00:56:09.019356 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:56:09.019363 | orchestrator | Sunday 08 March 2026 00:54:09 +0000 (0:00:00.698) 0:09:23.910 ********** 2026-03-08 00:56:09.019369 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019376 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019384 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019391 | orchestrator | 2026-03-08 00:56:09.019398 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:56:09.019405 | orchestrator | Sunday 08 March 2026 00:54:11 +0000 (0:00:01.201) 0:09:25.112 ********** 2026-03-08 00:56:09.019413 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019429 | orchestrator | ok: [testbed-node-4] 
2026-03-08 00:56:09.019436 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019442 | orchestrator | 2026-03-08 00:56:09.019449 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-08 00:56:09.019456 | orchestrator | Sunday 08 March 2026 00:54:12 +0000 (0:00:00.941) 0:09:26.053 ********** 2026-03-08 00:56:09.019462 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019469 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019475 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019482 | orchestrator | 2026-03-08 00:56:09.019489 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-08 00:56:09.019495 | orchestrator | Sunday 08 March 2026 00:54:12 +0000 (0:00:00.374) 0:09:26.427 ********** 2026-03-08 00:56:09.019510 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019517 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019523 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019530 | orchestrator | 2026-03-08 00:56:09.019536 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:56:09.019543 | orchestrator | Sunday 08 March 2026 00:54:12 +0000 (0:00:00.357) 0:09:26.785 ********** 2026-03-08 00:56:09.019549 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019556 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019561 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019568 | orchestrator | 2026-03-08 00:56:09.019581 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:56:09.019587 | orchestrator | Sunday 08 March 2026 00:54:13 +0000 (0:00:00.633) 0:09:27.418 ********** 2026-03-08 00:56:09.019594 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019601 | orchestrator | ok: [testbed-node-3] 2026-03-08 
00:56:09.019607 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019613 | orchestrator | 2026-03-08 00:56:09.019619 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:56:09.019625 | orchestrator | Sunday 08 March 2026 00:54:14 +0000 (0:00:00.728) 0:09:28.147 ********** 2026-03-08 00:56:09.019631 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019637 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019643 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019655 | orchestrator | 2026-03-08 00:56:09.019662 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:56:09.019668 | orchestrator | Sunday 08 March 2026 00:54:14 +0000 (0:00:00.743) 0:09:28.890 ********** 2026-03-08 00:56:09.019674 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019681 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019687 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019692 | orchestrator | 2026-03-08 00:56:09.019698 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:56:09.019704 | orchestrator | Sunday 08 March 2026 00:54:15 +0000 (0:00:00.343) 0:09:29.234 ********** 2026-03-08 00:56:09.019710 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019717 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019723 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019729 | orchestrator | 2026-03-08 00:56:09.019745 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:56:09.019752 | orchestrator | Sunday 08 March 2026 00:54:15 +0000 (0:00:00.607) 0:09:29.841 ********** 2026-03-08 00:56:09.019759 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019765 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019771 | orchestrator | ok: 
[testbed-node-5] 2026-03-08 00:56:09.019778 | orchestrator | 2026-03-08 00:56:09.019784 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:56:09.019791 | orchestrator | Sunday 08 March 2026 00:54:16 +0000 (0:00:00.347) 0:09:30.189 ********** 2026-03-08 00:56:09.019797 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019803 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019809 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019816 | orchestrator | 2026-03-08 00:56:09.019822 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:56:09.019885 | orchestrator | Sunday 08 March 2026 00:54:16 +0000 (0:00:00.329) 0:09:30.518 ********** 2026-03-08 00:56:09.019892 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.019898 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.019905 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.019911 | orchestrator | 2026-03-08 00:56:09.019918 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:56:09.019925 | orchestrator | Sunday 08 March 2026 00:54:16 +0000 (0:00:00.333) 0:09:30.852 ********** 2026-03-08 00:56:09.019932 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019939 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019946 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.019953 | orchestrator | 2026-03-08 00:56:09.019960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:56:09.019967 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:00.594) 0:09:31.447 ********** 2026-03-08 00:56:09.019973 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.019980 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.019986 | orchestrator | skipping: [testbed-node-5] 2026-03-08 
00:56:09.019992 | orchestrator | 2026-03-08 00:56:09.019998 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:56:09.020004 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:00.302) 0:09:31.750 ********** 2026-03-08 00:56:09.020010 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.020017 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.020022 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.020028 | orchestrator | 2026-03-08 00:56:09.020035 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:56:09.020042 | orchestrator | Sunday 08 March 2026 00:54:18 +0000 (0:00:00.290) 0:09:32.041 ********** 2026-03-08 00:56:09.020048 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.020055 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.020062 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.020068 | orchestrator | 2026-03-08 00:56:09.020074 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:56:09.020091 | orchestrator | Sunday 08 March 2026 00:54:18 +0000 (0:00:00.331) 0:09:32.372 ********** 2026-03-08 00:56:09.020098 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:56:09.020104 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:56:09.020109 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:56:09.020116 | orchestrator | 2026-03-08 00:56:09.020122 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-08 00:56:09.020128 | orchestrator | Sunday 08 March 2026 00:54:19 +0000 (0:00:00.838) 0:09:33.211 ********** 2026-03-08 00:56:09.020134 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:56:09.020141 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:56:09.020147 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for 
testbed-node-3 2026-03-08 00:56:09.020153 | orchestrator | 2026-03-08 00:56:09.020160 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-08 00:56:09.020166 | orchestrator | Sunday 08 March 2026 00:54:19 +0000 (0:00:00.390) 0:09:33.602 ********** 2026-03-08 00:56:09.020173 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.020180 | orchestrator | 2026-03-08 00:56:09.020187 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-08 00:56:09.020193 | orchestrator | Sunday 08 March 2026 00:54:21 +0000 (0:00:02.198) 0:09:35.800 ********** 2026-03-08 00:56:09.020209 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-08 00:56:09.020219 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:56:09.020225 | orchestrator | 2026-03-08 00:56:09.020231 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-08 00:56:09.020237 | orchestrator | Sunday 08 March 2026 00:54:22 +0000 (0:00:00.211) 0:09:36.012 ********** 2026-03-08 00:56:09.020245 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:56:09.020258 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:56:09.020264 | orchestrator | 2026-03-08 
00:56:09.020270 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-08 00:56:09.020276 | orchestrator | Sunday 08 March 2026 00:54:30 +0000 (0:00:08.850) 0:09:44.863 ********** 2026-03-08 00:56:09.020283 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:56:09.020289 | orchestrator | 2026-03-08 00:56:09.020304 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-08 00:56:09.020310 | orchestrator | Sunday 08 March 2026 00:54:34 +0000 (0:00:03.624) 0:09:48.487 ********** 2026-03-08 00:56:09.020316 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:56:09.020323 | orchestrator | 2026-03-08 00:56:09.020329 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-08 00:56:09.020335 | orchestrator | Sunday 08 March 2026 00:54:35 +0000 (0:00:00.737) 0:09:49.224 ********** 2026-03-08 00:56:09.020341 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-08 00:56:09.020348 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-08 00:56:09.020354 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-08 00:56:09.020360 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-08 00:56:09.020367 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-08 00:56:09.020385 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-08 00:56:09.020391 | orchestrator | 2026-03-08 00:56:09.020398 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-08 00:56:09.020404 | orchestrator | Sunday 08 March 2026 00:54:36 +0000 (0:00:01.044) 0:09:50.269 
**********
2026-03-08 00:56:09.020411 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.020417 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.020424 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:56:09.020430 | orchestrator |
2026-03-08 00:56:09.020437 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:56:09.020443 | orchestrator | Sunday 08 March 2026 00:54:38 +0000 (0:00:02.456) 0:09:52.726 **********
2026-03-08 00:56:09.020450 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.020457 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.020464 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020470 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 00:56:09.020477 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-08 00:56:09.020484 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020490 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 00:56:09.020497 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-08 00:56:09.020503 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020510 | orchestrator |
2026-03-08 00:56:09.020516 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-08 00:56:09.020524 | orchestrator | Sunday 08 March 2026 00:54:40 +0000 (0:00:01.545) 0:09:54.272 **********
2026-03-08 00:56:09.020530 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020536 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020543 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020550 | orchestrator |
2026-03-08 00:56:09.020556 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-08 00:56:09.020563 | orchestrator | Sunday 08 March 2026 00:54:42 +0000 (0:00:02.648) 0:09:56.921 **********
2026-03-08 00:56:09.020570 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.020576 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.020583 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.020590 | orchestrator |
2026-03-08 00:56:09.020596 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-08 00:56:09.020604 | orchestrator | Sunday 08 March 2026 00:54:43 +0000 (0:00:00.408) 0:09:57.330 **********
2026-03-08 00:56:09.020611 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.020617 | orchestrator |
2026-03-08 00:56:09.020623 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-08 00:56:09.020629 | orchestrator | Sunday 08 March 2026 00:54:44 +0000 (0:00:00.818) 0:09:58.148 **********
2026-03-08 00:56:09.020634 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.020640 | orchestrator |
2026-03-08 00:56:09.020645 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-08 00:56:09.020657 | orchestrator | Sunday 08 March 2026 00:54:44 +0000 (0:00:00.549) 0:09:58.698 **********
2026-03-08 00:56:09.020663 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020670 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020676 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020682 | orchestrator |
2026-03-08 00:56:09.020689 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-08 00:56:09.020696 | orchestrator | Sunday 08 March 2026 00:54:45 +0000 (0:00:01.184) 0:09:59.883 **********
2026-03-08 00:56:09.020702 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020714 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020721 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020727 | orchestrator |
2026-03-08 00:56:09.020734 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-08 00:56:09.020740 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:01.462) 0:10:01.345 **********
2026-03-08 00:56:09.020747 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020753 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020759 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020766 | orchestrator |
2026-03-08 00:56:09.020772 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-08 00:56:09.020777 | orchestrator | Sunday 08 March 2026 00:54:49 +0000 (0:00:01.804) 0:10:03.150 **********
2026-03-08 00:56:09.020783 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020788 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020793 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020800 | orchestrator |
2026-03-08 00:56:09.020813 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-08 00:56:09.020821 | orchestrator | Sunday 08 March 2026 00:54:51 +0000 (0:00:01.943) 0:10:05.094 **********
2026-03-08 00:56:09.020851 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.020858 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.020864 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.020870 | orchestrator |
2026-03-08 00:56:09.020877 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:56:09.020884 | orchestrator | Sunday 08 March 2026 00:54:52 +0000 (0:00:01.461) 0:10:06.555 **********
2026-03-08 00:56:09.020891 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.020897 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.020903 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.020910 | orchestrator |
2026-03-08 00:56:09.020916 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-08 00:56:09.020922 | orchestrator | Sunday 08 March 2026 00:54:53 +0000 (0:00:00.693) 0:10:07.249 **********
2026-03-08 00:56:09.020929 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.020936 | orchestrator |
2026-03-08 00:56:09.020942 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-08 00:56:09.020949 | orchestrator | Sunday 08 March 2026 00:54:54 +0000 (0:00:00.357) 0:10:08.008 **********
2026-03-08 00:56:09.020956 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.020962 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.020969 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.020975 | orchestrator |
2026-03-08 00:56:09.020982 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-08 00:56:09.020988 | orchestrator | Sunday 08 March 2026 00:54:54 +0000 (0:00:00.357) 0:10:08.365 **********
2026-03-08 00:56:09.020995 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.021002 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.021008 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.021015 | orchestrator |
2026-03-08 00:56:09.021022 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-08 00:56:09.021028 | orchestrator | Sunday 08 March 2026 00:54:55 +0000 (0:00:01.203) 0:10:09.569 **********
2026-03-08 00:56:09.021035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.021042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.021048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.021055 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021062 | orchestrator |
2026-03-08 00:56:09.021068 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-08 00:56:09.021075 | orchestrator | Sunday 08 March 2026 00:54:56 +0000 (0:00:00.855) 0:10:10.424 **********
2026-03-08 00:56:09.021082 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021094 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021100 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021107 | orchestrator |
2026-03-08 00:56:09.021114 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-08 00:56:09.021120 | orchestrator |
2026-03-08 00:56:09.021127 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:56:09.021133 | orchestrator | Sunday 08 March 2026 00:54:57 +0000 (0:00:00.806) 0:10:11.231 **********
2026-03-08 00:56:09.021140 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.021147 | orchestrator |
2026-03-08 00:56:09.021154 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:56:09.021160 | orchestrator | Sunday 08 March 2026 00:54:57 +0000 (0:00:00.522) 0:10:11.753 **********
2026-03-08 00:56:09.021167 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.021173 | orchestrator |
2026-03-08 00:56:09.021179 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:56:09.021186 | orchestrator | Sunday 08 March 2026 00:54:58 +0000 (0:00:00.762) 0:10:12.516 **********
2026-03-08 00:56:09.021193 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021199 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021206 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021213 | orchestrator |
2026-03-08 00:56:09.021220 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:56:09.021231 | orchestrator | Sunday 08 March 2026 00:54:58 +0000 (0:00:00.314) 0:10:12.830 **********
2026-03-08 00:56:09.021238 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021244 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021251 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021257 | orchestrator |
2026-03-08 00:56:09.021264 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:56:09.021270 | orchestrator | Sunday 08 March 2026 00:54:59 +0000 (0:00:00.678) 0:10:13.509 **********
2026-03-08 00:56:09.021277 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021283 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021290 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021296 | orchestrator |
2026-03-08 00:56:09.021303 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:56:09.021309 | orchestrator | Sunday 08 March 2026 00:55:00 +0000 (0:00:00.730) 0:10:14.239 **********
2026-03-08 00:56:09.021314 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021320 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021326 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021333 | orchestrator |
2026-03-08 00:56:09.021339 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:56:09.021346 | orchestrator | Sunday 08 March 2026 00:55:01 +0000 (0:00:01.161) 0:10:15.401 **********
2026-03-08 00:56:09.021352 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021359 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021366 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021373 | orchestrator |
2026-03-08 00:56:09.021379 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:56:09.021392 | orchestrator | Sunday 08 March 2026 00:55:01 +0000 (0:00:00.325) 0:10:15.727 **********
2026-03-08 00:56:09.021399 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021405 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021412 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021419 | orchestrator |
2026-03-08 00:56:09.021425 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:56:09.021432 | orchestrator | Sunday 08 March 2026 00:55:02 +0000 (0:00:00.383) 0:10:16.111 **********
2026-03-08 00:56:09.021438 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021461 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021467 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021474 | orchestrator |
2026-03-08 00:56:09.021481 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:56:09.021487 | orchestrator | Sunday 08 March 2026 00:55:02 +0000 (0:00:00.342) 0:10:16.454 **********
2026-03-08 00:56:09.021494 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021501 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021508 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021514 | orchestrator |
2026-03-08 00:56:09.021521 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:56:09.021528 | orchestrator | Sunday 08 March 2026 00:55:03 +0000 (0:00:01.082) 0:10:17.536 **********
2026-03-08 00:56:09.021535 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021541 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021548 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021555 | orchestrator |
2026-03-08 00:56:09.021561 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-08 00:56:09.021567 | orchestrator | Sunday 08 March 2026 00:55:04 +0000 (0:00:00.764) 0:10:18.300 **********
2026-03-08 00:56:09.021572 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021578 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021585 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021591 | orchestrator |
2026-03-08 00:56:09.021598 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-08 00:56:09.021604 | orchestrator | Sunday 08 March 2026 00:55:04 +0000 (0:00:00.300) 0:10:18.601 **********
2026-03-08 00:56:09.021611 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021618 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021624 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021630 | orchestrator |
2026-03-08 00:56:09.021637 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-08 00:56:09.021644 | orchestrator | Sunday 08 March 2026 00:55:05 +0000 (0:00:00.347) 0:10:18.949 **********
2026-03-08 00:56:09.021651 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021657 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021664 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021670 | orchestrator |
2026-03-08 00:56:09.021677 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-08 00:56:09.021683 | orchestrator | Sunday 08 March 2026 00:55:05 +0000 (0:00:00.684) 0:10:19.633 **********
2026-03-08 00:56:09.021690 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021697 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021703 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021709 | orchestrator |
2026-03-08 00:56:09.021716 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-08 00:56:09.021723 | orchestrator | Sunday 08 March 2026 00:55:06 +0000 (0:00:00.354) 0:10:19.987 **********
2026-03-08 00:56:09.021730 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021736 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021742 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021749 | orchestrator |
2026-03-08 00:56:09.021755 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-08 00:56:09.021761 | orchestrator | Sunday 08 March 2026 00:55:06 +0000 (0:00:00.318) 0:10:20.305 **********
2026-03-08 00:56:09.021768 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021775 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021781 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021788 | orchestrator |
2026-03-08 00:56:09.021795 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-08 00:56:09.021801 | orchestrator | Sunday 08 March 2026 00:55:06 +0000 (0:00:00.317) 0:10:20.623 **********
2026-03-08 00:56:09.021808 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021814 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021821 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021855 | orchestrator |
2026-03-08 00:56:09.021862 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-08 00:56:09.021868 | orchestrator | Sunday 08 March 2026 00:55:07 +0000 (0:00:00.580) 0:10:21.204 **********
2026-03-08 00:56:09.021874 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.021884 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.021891 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.021897 | orchestrator |
2026-03-08 00:56:09.021904 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-08 00:56:09.021910 | orchestrator | Sunday 08 March 2026 00:55:07 +0000 (0:00:00.379) 0:10:21.583 **********
2026-03-08 00:56:09.021917 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021924 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021930 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021936 | orchestrator |
2026-03-08 00:56:09.021943 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-08 00:56:09.021949 | orchestrator | Sunday 08 March 2026 00:55:08 +0000 (0:00:00.373) 0:10:21.956 **********
2026-03-08 00:56:09.021956 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.021962 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.021969 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.021975 | orchestrator |
2026-03-08 00:56:09.021982 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-08 00:56:09.021988 | orchestrator | Sunday 08 March 2026 00:55:08 +0000 (0:00:00.821) 0:10:22.777 **********
2026-03-08 00:56:09.021994 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.022000 | orchestrator |
2026-03-08 00:56:09.022006 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-08 00:56:09.022053 | orchestrator | Sunday 08 March 2026 00:55:09 +0000 (0:00:00.614) 0:10:23.392 **********
2026-03-08 00:56:09.022070 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022077 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.022083 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:56:09.022091 | orchestrator |
2026-03-08 00:56:09.022098 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:56:09.022105 | orchestrator | Sunday 08 March 2026 00:55:11 +0000 (0:00:02.293) 0:10:25.686 **********
2026-03-08 00:56:09.022112 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.022120 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.022127 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.022134 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 00:56:09.022141 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-08 00:56:09.022148 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.022155 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 00:56:09.022164 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-08 00:56:09.022171 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.022178 | orchestrator |
2026-03-08 00:56:09.022185 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-08 00:56:09.022192 | orchestrator | Sunday 08 March 2026 00:55:13 +0000 (0:00:01.512) 0:10:27.198 **********
2026-03-08 00:56:09.022199 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.022206 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.022213 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.022220 | orchestrator |
2026-03-08 00:56:09.022226 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-08 00:56:09.022233 | orchestrator | Sunday 08 March 2026 00:55:13 +0000 (0:00:00.332) 0:10:27.530 **********
2026-03-08 00:56:09.022240 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.022247 | orchestrator |
2026-03-08 00:56:09.022255 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-08 00:56:09.022269 | orchestrator | Sunday 08 March 2026 00:55:14 +0000 (0:00:00.592) 0:10:28.123 **********
2026-03-08 00:56:09.022276 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.022285 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.022292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.022300 | orchestrator |
2026-03-08 00:56:09.022307 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-08 00:56:09.022314 | orchestrator | Sunday 08 March 2026 00:55:15 +0000 (0:00:01.313) 0:10:29.436 **********
2026-03-08 00:56:09.022321 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022327 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-08 00:56:09.022334 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022342 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-08 00:56:09.022350 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022358 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-08 00:56:09.022365 | orchestrator |
2026-03-08 00:56:09.022373 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-08 00:56:09.022380 | orchestrator | Sunday 08 March 2026 00:55:19 +0000 (0:00:04.390) 0:10:33.827 **********
2026-03-08 00:56:09.022391 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022398 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:56:09.022405 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022412 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:56:09.022419 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:56:09.022427 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:56:09.022434 | orchestrator |
2026-03-08 00:56:09.022441 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:56:09.022448 | orchestrator | Sunday 08 March 2026 00:55:22 +0000 (0:00:02.582) 0:10:36.410 **********
2026-03-08 00:56:09.022455 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 00:56:09.022462 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.022469 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 00:56:09.022475 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.022482 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 00:56:09.022489 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.022495 | orchestrator |
2026-03-08 00:56:09.022501 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-08 00:56:09.022508 | orchestrator | Sunday 08 March 2026 00:55:23 +0000 (0:00:01.280) 0:10:37.691 **********
2026-03-08 00:56:09.022522 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-08 00:56:09.022529 | orchestrator |
2026-03-08 00:56:09.022537 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-08 00:56:09.022544 | orchestrator | Sunday 08 March 2026 00:55:24 +0000 (0:00:00.277) 0:10:37.969 **********
2026-03-08 00:56:09.022557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022593 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.022600 | orchestrator |
2026-03-08 00:56:09.022607 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-08 00:56:09.022614 | orchestrator | Sunday 08 March 2026 00:55:25 +0000 (0:00:01.108) 0:10:39.078 **********
2026-03-08 00:56:09.022621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022657 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.022664 | orchestrator |
2026-03-08 00:56:09.022671 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-08 00:56:09.022678 | orchestrator | Sunday 08 March 2026 00:55:25 +0000 (0:00:00.540) 0:10:39.618 **********
2026-03-08 00:56:09.022685 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022692 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022700 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022707 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022715 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-08 00:56:09.022722 | orchestrator |
2026-03-08 00:56:09.022729 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-08 00:56:09.022736 | orchestrator | Sunday 08 March 2026 00:55:55 +0000 (0:00:30.249) 0:11:09.867 **********
2026-03-08 00:56:09.022743 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.022754 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.022762 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.022768 | orchestrator |
2026-03-08 00:56:09.022776 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-08 00:56:09.022783 | orchestrator | Sunday 08 March 2026 00:55:56 +0000 (0:00:00.361) 0:11:10.229 **********
2026-03-08 00:56:09.022791 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.022798 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.022813 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.022820 | orchestrator |
2026-03-08 00:56:09.022880 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-08 00:56:09.022887 | orchestrator | Sunday 08 March 2026 00:55:56 +0000 (0:00:00.344) 0:11:10.573 **********
2026-03-08 00:56:09.022894 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.022900 | orchestrator |
2026-03-08 00:56:09.022907 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-08 00:56:09.022913 | orchestrator | Sunday 08 March 2026 00:55:57 +0000 (0:00:00.812) 0:11:11.385 **********
2026-03-08 00:56:09.022919 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.022926 | orchestrator |
2026-03-08 00:56:09.022932 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-08 00:56:09.022939 | orchestrator | Sunday 08 March 2026 00:55:57 +0000 (0:00:00.532) 0:11:11.917 **********
2026-03-08 00:56:09.022953 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.022960 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.022967 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.022974 | orchestrator |
2026-03-08 00:56:09.022981 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-08 00:56:09.022987 | orchestrator | Sunday 08 March 2026 00:55:59 +0000 (0:00:01.097) 0:11:13.015 **********
2026-03-08 00:56:09.022993 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.023000 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.023006 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.023013 | orchestrator |
2026-03-08 00:56:09.023020 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-08 00:56:09.023026 | orchestrator | Sunday 08 March 2026 00:56:00 +0000 (0:00:01.206) 0:11:14.222 **********
2026-03-08 00:56:09.023033 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:56:09.023039 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:56:09.023046 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:56:09.023052 | orchestrator |
2026-03-08 00:56:09.023059 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-08 00:56:09.023065 | orchestrator | Sunday 08 March 2026 00:56:02 +0000 (0:00:01.739) 0:11:15.961 **********
2026-03-08 00:56:09.023071 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.023078 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.023084 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-08 00:56:09.023089 | orchestrator |
2026-03-08 00:56:09.023095 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:56:09.023101 | orchestrator | Sunday 08 March 2026 00:56:04 +0000 (0:00:02.638) 0:11:18.599 **********
2026-03-08 00:56:09.023107 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.023114 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.023120 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.023127 | orchestrator |
2026-03-08 00:56:09.023134 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-08 00:56:09.023141 | orchestrator | Sunday 08 March 2026 00:56:05 +0000 (0:00:00.382) 0:11:18.982 **********
2026-03-08 00:56:09.023147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:56:09.023154 | orchestrator |
2026-03-08 00:56:09.023160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-08 00:56:09.023166 | orchestrator | Sunday 08 March 2026 00:56:05 +0000 (0:00:00.518) 0:11:19.501 **********
2026-03-08 00:56:09.023180 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.023187 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.023192 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.023198 | orchestrator |
2026-03-08 00:56:09.023204 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-08 00:56:09.023211 | orchestrator | Sunday 08 March 2026 00:56:06 +0000 (0:00:00.618) 0:11:20.119 **********
2026-03-08 00:56:09.023218 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.023223 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:56:09.023229 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:56:09.023234 | orchestrator |
2026-03-08 00:56:09.023240 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-08 00:56:09.023246 | orchestrator | Sunday 08 March 2026 00:56:06 +0000 (0:00:00.385) 0:11:20.505 **********
2026-03-08 00:56:09.023252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:56:09.023259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:56:09.023266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:56:09.023272 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:56:09.023279 | orchestrator |
2026-03-08 00:56:09.023285 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-08 00:56:09.023292 | orchestrator | Sunday 08 March 2026 00:56:07 +0000 (0:00:00.623) 0:11:21.128 **********
2026-03-08 00:56:09.023299 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:56:09.023305 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:56:09.023311 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:56:09.023317 | orchestrator |
2026-03-08 00:56:09.023327 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:56:09.023334 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-08 00:56:09.023342 | orchestrator | testbed-node-1 : ok=127  chang=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-08 00:56:09.023348 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-08 00:56:09.023354 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-08 00:56:09.023361 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-08 00:56:09.023367 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-08 00:56:09.023373 | orchestrator |
2026-03-08 00:56:09.023379 | orchestrator |
2026-03-08 00:56:09.023385 | orchestrator |
2026-03-08 00:56:09.023399 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:56:09.023405 | orchestrator | Sunday 08 March 2026 00:56:07 +0000 (0:00:00.276) 0:11:21.405 **********
2026-03-08 00:56:09.023412 | orchestrator | ===============================================================================
2026-03-08 00:56:09.023418 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.28s
2026-03-08 00:56:09.023424 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.33s
2026-03-08 00:56:09.023430 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.25s
2026-03-08 00:56:09.023435 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.13s
2026-03-08 00:56:09.023440 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.83s
2026-03-08 00:56:09.023446 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.38s
2026-03-08 00:56:09.023451 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.50s
2026-03-08 00:56:09.023468 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.53s
2026-03-08 00:56:09.023473 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.09s
2026-03-08 00:56:09.023479 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.85s
2026-03-08 00:56:09.023484 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.27s
2026-03-08 00:56:09.023490 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.25s
2026-03-08 00:56:09.023495 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.44s
2026-03-08 00:56:09.023500 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.39s
2026-03-08 00:56:09.023506 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.34s
2026-03-08 00:56:09.023511 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.20s
2026-03-08 00:56:09.023518 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.04s
2026-03-08 00:56:09.023525 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.96s
2026-03-08 00:56:09.023531 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.81s
2026-03-08 00:56:09.023536 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.78s
2026-03-08 00:56:09.023542 |
2026-03-08 00:56:09 | INFO  | Task 5336c1a0-3e66-4892-b314-928678c6142b is
in state STARTED 2026-03-08 00:56:09.023548 | orchestrator | 2026-03-08 00:56:09 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state STARTED 2026-03-08 00:56:09.023554 | orchestrator | 2026-03-08 00:56:09 | INFO  | Wait 1 second(s) until the next check [identical task-state polling cycles from 00:56:12 through 00:56:57 elided] 2026-03-08 00:57:00.870235 | orchestrator | 2026-03-08 00:57:00 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:00.875445 | orchestrator | 2026-03-08 00:57:00 | INFO  | Task 5336c1a0-3e66-4892-b314-928678c6142b is in state SUCCESS 2026-03-08 00:57:00.877053 | orchestrator | 2026-03-08 00:57:00.877105 | orchestrator | 2026-03-08 00:57:00.877111 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-08 00:57:00.877116 | orchestrator | 2026-03-08 00:57:00.877121 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-08 00:57:00.877125 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.097) 0:00:00.097 ********** 2026-03-08 00:57:00.877130 | orchestrator | ok: [localhost] => { 2026-03-08 00:57:00.877135 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-08 00:57:00.877139 | orchestrator | } 2026-03-08 00:57:00.877144 | orchestrator | 2026-03-08 00:57:00.877148 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-08 00:57:00.877152 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.051) 0:00:00.148 ********** 2026-03-08 00:57:00.877156 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-08 00:57:00.877178 | orchestrator | ...ignoring 2026-03-08 00:57:00.877183 | orchestrator | 2026-03-08 00:57:00.877187 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-08 00:57:00.877190 | orchestrator | Sunday 08 March 2026 00:53:59 +0000 (0:00:02.899) 0:00:03.048 ********** 2026-03-08 00:57:00.877194 | orchestrator | skipping: [localhost] 2026-03-08 00:57:00.877198 | orchestrator | 2026-03-08 00:57:00.877202 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-08 00:57:00.877205 | orchestrator | Sunday 08 March 2026 00:53:59 +0000 (0:00:00.061) 0:00:03.110 ********** 2026-03-08 00:57:00.877209 | orchestrator | ok: [localhost] 2026-03-08 00:57:00.877213 | orchestrator | 2026-03-08 00:57:00.877217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:57:00.877220 | orchestrator | 2026-03-08 00:57:00.877224 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:57:00.877228 | orchestrator | Sunday 08 March 2026 00:53:59 +0000 (0:00:00.189) 0:00:03.299 ********** 2026-03-08 00:57:00.877231 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:57:00.877235 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:57:00.877239 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:57:00.877243 | orchestrator | 2026-03-08 00:57:00.877246 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:57:00.877261 | orchestrator | Sunday 08 March 2026 00:53:59 +0000 (0:00:00.351) 0:00:03.651 ********** 2026-03-08 00:57:00.877265 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-08 00:57:00.877269 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-08 00:57:00.877273 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-08 00:57:00.877277 | orchestrator | 2026-03-08 00:57:00.877281 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-08 00:57:00.877284 | orchestrator | 2026-03-08 00:57:00.877288 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-08 00:57:00.877292 | orchestrator | Sunday 08 March 2026 00:54:00 +0000 (0:00:00.609) 0:00:04.261 ********** 2026-03-08 00:57:00.877296 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-08 00:57:00.877299 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-08 00:57:00.877303 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-08 00:57:00.877307 | orchestrator | 2026-03-08 00:57:00.877311 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-08 00:57:00.877315 | orchestrator | Sunday 08 March 2026 00:54:00 +0000 (0:00:00.417) 0:00:04.678 ********** 2026-03-08 00:57:00.877319 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:57:00.877323 | orchestrator | 2026-03-08 00:57:00.877327 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-08 00:57:00.877331 | orchestrator | Sunday 08 March 2026 00:54:01 +0000 (0:00:00.545) 0:00:05.224 ********** 2026-03-08 00:57:00.877350 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877377 | orchestrator | 2026-03-08 00:57:00.877383 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-08 00:57:00.877387 | orchestrator | Sunday 08 March 2026 00:54:04 +0000 (0:00:02.928) 0:00:08.153 ********** 2026-03-08 00:57:00.877391 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.877395 | orchestrator | 
skipping: [testbed-node-1] 2026-03-08 00:57:00.877398 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.877402 | orchestrator | 2026-03-08 00:57:00.877406 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-08 00:57:00.877410 | orchestrator | Sunday 08 March 2026 00:54:05 +0000 (0:00:00.940) 0:00:09.094 ********** 2026-03-08 00:57:00.877413 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.877417 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.877421 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.877425 | orchestrator | 2026-03-08 00:57:00.877428 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-08 00:57:00.877432 | orchestrator | Sunday 08 March 2026 00:54:07 +0000 (0:00:01.684) 0:00:10.779 ********** 2026-03-08 00:57:00.877439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877462 | orchestrator | 2026-03-08 00:57:00.877466 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-08 00:57:00.877470 | orchestrator | Sunday 08 March 2026 00:54:11 +0000 (0:00:04.321) 0:00:15.100 ********** 2026-03-08 00:57:00.877474 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.877478 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.877481 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.877485 | orchestrator | 2026-03-08 00:57:00.877489 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-08 00:57:00.877493 | orchestrator | Sunday 08 March 2026 00:54:12 +0000 (0:00:01.397) 0:00:16.498 ********** 2026-03-08 00:57:00.877496 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.877500 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:57:00.877657 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:57:00.877671 | orchestrator | 2026-03-08 00:57:00.877677 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-08 00:57:00.877681 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:04.902) 0:00:21.400 ********** 2026-03-08 00:57:00.877685 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:57:00.877689 | orchestrator | 2026-03-08 00:57:00.877693 | 
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-08 00:57:00.877697 | orchestrator | Sunday 08 March 2026 00:54:18 +0000 (0:00:00.548) 0:00:21.949 ********** 2026-03-08 00:57:00.877707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877792 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.877800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877809 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.877819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 
00:57:00.877823 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:00.877827 | orchestrator | 2026-03-08 00:57:00.877831 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-08 00:57:00.877835 | orchestrator | Sunday 08 March 2026 00:54:22 +0000 (0:00:03.856) 0:00:25.805 ********** 2026-03-08 00:57:00.877842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877853 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:00.877861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877865 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.877871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877882 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.877885 | orchestrator | 2026-03-08 00:57:00.877889 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-08 00:57:00.877893 | orchestrator | Sunday 08 March 2026 00:54:25 +0000 (0:00:03.737) 0:00:29.543 ********** 2026-03-08 00:57:00.877900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877904 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:00.877910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877920 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.877925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:57:00.877929 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.877933 | orchestrator | 2026-03-08 00:57:00.877936 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-08 00:57:00.877940 | orchestrator | Sunday 08 March 2026 00:54:29 +0000 (0:00:03.290) 0:00:32.834 ********** 2026-03-08 00:57:00.877950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:57:00.877966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-08 00:57:00.877971 | orchestrator |
2026-03-08 00:57:00.877974 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-08 00:57:00.877978 | orchestrator | Sunday 08 March 2026 00:54:32 +0000 (0:00:03.428) 0:00:36.262 **********
2026-03-08 00:57:00.877988 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:57:00.877992 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:57:00.877995 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:57:00.877999 | orchestrator |
2026-03-08 00:57:00.878003 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-08 00:57:00.878007 | orchestrator | Sunday 08 March 2026 00:54:33 +0000 (0:00:00.812) 0:00:37.075 **********
2026-03-08 00:57:00.878011 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878137 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:57:00.878142 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:57:00.878146 | orchestrator |
2026-03-08 00:57:00.878150 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-08 00:57:00.878154 | orchestrator | Sunday 08 March 2026 00:54:34 +0000 (0:00:00.490) 0:00:37.936 **********
2026-03-08 00:57:00.878157 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878161 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:57:00.878165 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:57:00.878169 | orchestrator |
2026-03-08 00:57:00.878172 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-08 00:57:00.878176 | orchestrator | Sunday 08 March 2026 00:54:34 +0000 (0:00:00.490) 0:00:38.426 **********
2026-03-08 00:57:00.878181 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-08 00:57:00.878186 | orchestrator | ...ignoring
2026-03-08 00:57:00.878191 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-08 00:57:00.878194 | orchestrator | ...ignoring
2026-03-08 00:57:00.878199 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-08 00:57:00.878202 | orchestrator | ...ignoring
2026-03-08 00:57:00.878206 | orchestrator |
2026-03-08 00:57:00.878210 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-08 00:57:00.878214 | orchestrator | Sunday 08 March 2026 00:54:45 +0000 (0:00:10.890) 0:00:49.317 **********
2026-03-08 00:57:00.878218 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878221 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:57:00.878225 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:57:00.878229 | orchestrator |
2026-03-08 00:57:00.878233 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-08 00:57:00.878237 | orchestrator | Sunday 08 March 2026 00:54:45 +0000 (0:00:00.424) 0:00:49.741 **********
2026-03-08 00:57:00.878240 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878244 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878248 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878252 | orchestrator |
2026-03-08 00:57:00.878256 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-08 00:57:00.878260 | orchestrator | Sunday 08 March 2026 00:54:46 +0000 (0:00:00.681) 0:00:50.423 **********
2026-03-08 00:57:00.878264 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878270 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878276 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878283 | orchestrator |
2026-03-08 00:57:00.878291 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-08 00:57:00.878298 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:00.453) 0:00:50.877 **********
2026-03-08 00:57:00.878304 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878309 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878315 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878321 | orchestrator |
2026-03-08 00:57:00.878326 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-08 00:57:00.878337 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:00.418) 0:00:51.295 **********
2026-03-08 00:57:00.878350 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878357 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:57:00.878363 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:57:00.878369 | orchestrator |
2026-03-08 00:57:00.878372 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-08 00:57:00.878377 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:00.426) 0:00:51.721 **********
2026-03-08 00:57:00.878380 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878384 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878388 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878392 | orchestrator |
2026-03-08 00:57:00.878396 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-08 00:57:00.878402 | orchestrator | Sunday 08 March 2026 00:54:48 +0000 (0:00:00.701) 0:00:52.423 **********
2026-03-08 00:57:00.878408 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878414 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878420 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-08 00:57:00.878425 | orchestrator |
2026-03-08 00:57:00.878431 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-08 00:57:00.878437 | orchestrator | Sunday 08 March 2026 00:54:49 +0000 (0:00:00.411) 0:00:52.834 **********
2026-03-08 00:57:00.878443 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:57:00.878449 | orchestrator |
2026-03-08 00:57:00.878455 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-08 00:57:00.878462 | orchestrator | Sunday 08 March 2026 00:55:00 +0000 (0:00:11.368) 0:01:04.202 **********
2026-03-08 00:57:00.878467 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878471 | orchestrator |
2026-03-08 00:57:00.878475 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-08 00:57:00.878479 | orchestrator | Sunday 08 March 2026 00:55:00 +0000 (0:00:00.133) 0:01:04.335 **********
2026-03-08 00:57:00.878482 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878486 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878490 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878494 | orchestrator |
2026-03-08 00:57:00.878498 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-08 00:57:00.878506 | orchestrator | Sunday 08 March 2026 00:55:01 +0000 (0:00:00.997) 0:01:05.333 **********
2026-03-08 00:57:00.878509 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:57:00.878513 | orchestrator |
2026-03-08 00:57:00.878517 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-08 00:57:00.878521 | orchestrator | Sunday 08 March 2026 00:55:09 +0000 (0:00:08.051) 0:01:13.385 **********
2026-03-08 00:57:00.878524 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878528 | orchestrator |
2026-03-08 00:57:00.878532 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-08 00:57:00.878536 | orchestrator | Sunday 08 March 2026 00:55:12 +0000 (0:00:02.583) 0:01:15.969 **********
2026-03-08 00:57:00.878539 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878543 | orchestrator |
2026-03-08 00:57:00.878547 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-08 00:57:00.878551 | orchestrator | Sunday 08 March 2026 00:55:14 +0000 (0:00:02.620) 0:01:18.589 **********
2026-03-08 00:57:00.878554 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:57:00.878558 | orchestrator |
2026-03-08 00:57:00.878562 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-08 00:57:00.878565 | orchestrator | Sunday 08 March 2026 00:55:14 +0000 (0:00:00.110) 0:01:18.700 **********
2026-03-08 00:57:00.878569 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878573 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:57:00.878577 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:57:00.878580 | orchestrator |
2026-03-08 00:57:00.878584 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-08 00:57:00.878592 | orchestrator | Sunday 08 March 2026 00:55:15 +0000 (0:00:00.320) 0:01:19.021 **********
2026-03-08 00:57:00.878596 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:57:00.878599 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-08 00:57:00.878603 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:57:00.878607 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:57:00.878611 | orchestrator |
2026-03-08 00:57:00.878614 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-08 00:57:00.878618 | orchestrator | skipping: no hosts matched
2026-03-08 00:57:00.878622 | orchestrator |
2026-03-08 00:57:00.878625 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-08 00:57:00.878629 | orchestrator |
2026-03-08 00:57:00.878633 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-08 00:57:00.878637 | orchestrator | Sunday 08 March 2026 00:55:15 +0000 (0:00:00.572) 0:01:19.594 **********
2026-03-08 00:57:00.878640 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:57:00.878644 | orchestrator |
2026-03-08 00:57:00.878648 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-08 00:57:00.878651 | orchestrator | Sunday 08 March 2026 00:55:31 +0000 (0:00:15.646) 0:01:35.241 **********
2026-03-08 00:57:00.878655 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:57:00.878659 | orchestrator |
2026-03-08 00:57:00.878663 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-08 00:57:00.878666 | orchestrator | Sunday 08 March 2026 00:55:47 +0000 (0:00:15.563) 0:01:50.804 **********
2026-03-08 00:57:00.878670 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:57:00.878674 | orchestrator |
2026-03-08 00:57:00.878677 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-08 00:57:00.878681 | orchestrator |
2026-03-08 00:57:00.878685 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-08 00:57:00.878689 | orchestrator | Sunday 08 March 2026 00:55:49 +0000 (0:00:02.512) 0:01:53.317 **********
2026-03-08 00:57:00.878692 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:57:00.878696 | orchestrator |
2026-03-08 00:57:00.878700 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-08 00:57:00.878707 | orchestrator | Sunday 08 March 2026 00:56:07 +0000 (0:00:17.546) 0:02:10.863 **********
2026-03-08 00:57:00.878733 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:57:00.878737 | orchestrator |
2026-03-08 00:57:00.878741 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-08 00:57:00.878745 | orchestrator | Sunday 08 March 2026 00:56:23 +0000 (0:00:16.595) 0:02:27.459 **********
2026-03-08 00:57:00.878749 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:57:00.878752 | orchestrator |
2026-03-08 00:57:00.878756 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-08 00:57:00.878761 | orchestrator |
2026-03-08 00:57:00.878765 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-08 00:57:00.878769 | orchestrator | Sunday 08 March 2026 00:56:26 +0000 (0:00:02.529) 0:02:29.988 **********
2026-03-08 00:57:00.878774 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:57:00.878779 | orchestrator |
2026-03-08 00:57:00.878783 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-08 00:57:00.878788 | orchestrator | Sunday 08 March 2026 00:56:43 +0000 (0:00:17.114) 0:02:47.102 **********
2026-03-08 00:57:00.878792 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878796 | orchestrator |
2026-03-08 00:57:00.878801 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-08 00:57:00.878806 | orchestrator | Sunday 08 March 2026 00:56:43 +0000 (0:00:00.546) 0:02:47.649 **********
2026-03-08 00:57:00.878810 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:57:00.878816 | orchestrator |
2026-03-08 00:57:00.878822 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2026-03-08 00:57:00.878828 | orchestrator | 2026-03-08 00:57:00.878834 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-08 00:57:00.878844 | orchestrator | Sunday 08 March 2026 00:56:46 +0000 (0:00:02.893) 0:02:50.542 ********** 2026-03-08 00:57:00.878850 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:57:00.878856 | orchestrator | 2026-03-08 00:57:00.878862 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-08 00:57:00.878868 | orchestrator | Sunday 08 March 2026 00:56:47 +0000 (0:00:00.609) 0:02:51.152 ********** 2026-03-08 00:57:00.878874 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.878881 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.878885 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.878889 | orchestrator | 2026-03-08 00:57:00.878896 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-08 00:57:00.878900 | orchestrator | Sunday 08 March 2026 00:56:49 +0000 (0:00:02.404) 0:02:53.557 ********** 2026-03-08 00:57:00.878904 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.878907 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.878911 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.878915 | orchestrator | 2026-03-08 00:57:00.878919 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-08 00:57:00.878922 | orchestrator | Sunday 08 March 2026 00:56:51 +0000 (0:00:02.122) 0:02:55.679 ********** 2026-03-08 00:57:00.878926 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.878930 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.878933 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.878937 | 
orchestrator | 2026-03-08 00:57:00.878941 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-08 00:57:00.878945 | orchestrator | Sunday 08 March 2026 00:56:54 +0000 (0:00:02.209) 0:02:57.889 ********** 2026-03-08 00:57:00.878948 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.878952 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.878955 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:00.878959 | orchestrator | 2026-03-08 00:57:00.878963 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-08 00:57:00.878967 | orchestrator | Sunday 08 March 2026 00:56:56 +0000 (0:00:02.297) 0:03:00.186 ********** 2026-03-08 00:57:00.878970 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:57:00.878974 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:57:00.878978 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:57:00.878981 | orchestrator | 2026-03-08 00:57:00.878985 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-08 00:57:00.878989 | orchestrator | Sunday 08 March 2026 00:56:59 +0000 (0:00:03.192) 0:03:03.378 ********** 2026-03-08 00:57:00.878993 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:00.878996 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:00.879000 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:00.879004 | orchestrator | 2026-03-08 00:57:00.879007 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:57:00.879011 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-08 00:57:00.879015 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-08 00:57:00.879023 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=1  2026-03-08 00:57:00.879029 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-08 00:57:00.879035 | orchestrator | 2026-03-08 00:57:00.879041 | orchestrator | 2026-03-08 00:57:00.879047 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:57:00.879058 | orchestrator | Sunday 08 March 2026 00:56:59 +0000 (0:00:00.241) 0:03:03.620 ********** 2026-03-08 00:57:00.879064 | orchestrator | =============================================================================== 2026-03-08 00:57:00.879070 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.19s 2026-03-08 00:57:00.879077 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.16s 2026-03-08 00:57:00.879088 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.11s 2026-03-08 00:57:00.879092 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.37s 2026-03-08 00:57:00.879096 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2026-03-08 00:57:00.879100 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.05s 2026-03-08 00:57:00.879104 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.04s 2026-03-08 00:57:00.879107 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.90s 2026-03-08 00:57:00.879111 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.32s 2026-03-08 00:57:00.879115 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.86s 2026-03-08 00:57:00.879118 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.74s 2026-03-08 
00:57:00.879122 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.43s 2026-03-08 00:57:00.879126 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.29s 2026-03-08 00:57:00.879129 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.19s 2026-03-08 00:57:00.879133 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.93s 2026-03-08 00:57:00.879139 | orchestrator | Check MariaDB service --------------------------------------------------- 2.90s 2026-03-08 00:57:00.879144 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.89s 2026-03-08 00:57:00.879150 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.62s 2026-03-08 00:57:00.879156 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.58s 2026-03-08 00:57:00.879162 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.40s 2026-03-08 00:57:00.879168 | orchestrator | 2026-03-08 00:57:00 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state STARTED 2026-03-08 00:57:00.879182 | orchestrator | 2026-03-08 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:03.934220 | orchestrator | 2026-03-08 00:57:03 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:03.936378 | orchestrator | 2026-03-08 00:57:03 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:57:03.938425 | orchestrator | 2026-03-08 00:57:03 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state STARTED 2026-03-08 00:57:03.940590 | orchestrator | 2026-03-08 00:57:03 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:03.940914 | orchestrator | 2026-03-08 00:57:03 | INFO  | Wait 1 second(s) until the 
next check 2026-03-08 00:57:06.992190 | orchestrator | 2026-03-08 00:57:06 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:06.992281 | orchestrator | 2026-03-08 00:57:06 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:57:06.994875 | orchestrator | 2026-03-08 00:57:06 | INFO  | Task 1efc8df6-5acd-4042-bc8f-0b021379fb6e is in state SUCCESS 2026-03-08 00:57:06.996320 | orchestrator | 2026-03-08 00:57:06.996376 | orchestrator | 2026-03-08 00:57:06.996386 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:57:06.996395 | orchestrator | 2026-03-08 00:57:06.996401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:57:06.996442 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.274) 0:00:00.274 ********** 2026-03-08 00:57:06.996447 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:57:06.996452 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:57:06.996456 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:57:06.996459 | orchestrator | 2026-03-08 00:57:06.996463 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:57:06.996467 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.301) 0:00:00.576 ********** 2026-03-08 00:57:06.996472 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-08 00:57:06.996476 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-08 00:57:06.996480 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-08 00:57:06.996484 | orchestrator | 2026-03-08 00:57:06.996488 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-08 00:57:06.996492 | orchestrator | 2026-03-08 00:57:06.996495 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-03-08 00:57:06.996500 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:00.462) 0:00:01.039 ********** 2026-03-08 00:57:06.996504 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:57:06.996508 | orchestrator | 2026-03-08 00:57:06.996512 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-08 00:57:06.996516 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:00.515) 0:00:01.554 ********** 2026-03-08 00:57:06.996520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:57:06.996524 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:57:06.996527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:57:06.996531 | orchestrator | 2026-03-08 00:57:06.996535 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-08 00:57:06.996539 | orchestrator | Sunday 08 March 2026 00:53:58 +0000 (0:00:00.698) 0:00:02.253 ********** 2026-03-08 00:57:06.996545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.996563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.996577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.996588 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.996593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.996600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.996609 | orchestrator | 2026-03-08 00:57:06.996613 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:57:06.996617 | orchestrator | Sunday 08 March 2026 00:54:00 +0000 (0:00:01.719) 0:00:03.973 ********** 2026-03-08 00:57:06.996621 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:57:06.996625 | orchestrator | 2026-03-08 00:57:06.996629 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-08 00:57:06.996632 | orchestrator | Sunday 08 March 2026 00:54:00 +0000 (0:00:00.572) 0:00:04.545 ********** 2026-03-08 00:57:06.996641 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.996645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.996649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.996656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.996668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.996673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.996677 | orchestrator | 2026-03-08 
00:57:06.996681 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-08 00:57:06.996688 | orchestrator | Sunday 08 March 2026 00:54:03 +0000 (0:00:02.469) 0:00:07.015 ********** 2026-03-08 00:57:06.996695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:57:06.996822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:57:06.996836 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:06.996850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:57:06.996857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:57:06.996863 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:06.996869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:57:06.996879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:57:06.996890 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:06.996896 | orchestrator | 2026-03-08 00:57:06.996902 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-08 00:57:06.996908 | orchestrator | Sunday 08 March 2026 00:54:04 +0000 (0:00:01.324) 0:00:08.339 ********** 2026-03-08 00:57:06.996919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:57:06.996926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:57:06.996932 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:06.996936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:57:06.996943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:57:06.996953 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:06.996965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:57:06.996976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:57:06.996982 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:06.996988 | orchestrator | 2026-03-08 00:57:06.996994 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-08 00:57:06.997000 | orchestrator | Sunday 08 March 2026 00:54:05 +0000 (0:00:00.944) 0:00:09.284 ********** 2026-03-08 00:57:06.997007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.997019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.997033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.997044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.997050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.997057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.997067 | orchestrator | 2026-03-08 00:57:06.997072 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-08 00:57:06.997081 | orchestrator | Sunday 08 March 2026 00:54:08 +0000 (0:00:02.840) 0:00:12.125 ********** 2026-03-08 00:57:06.997086 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:57:06.997092 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:06.997097 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:57:06.997103 | orchestrator | 2026-03-08 00:57:06.997109 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-08 00:57:06.997115 | orchestrator | Sunday 08 March 2026 00:54:12 +0000 (0:00:03.995) 0:00:16.120 
********** 2026-03-08 00:57:06.997121 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:06.997127 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:57:06.997134 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:57:06.997140 | orchestrator | 2026-03-08 00:57:06.997145 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-08 00:57:06.997149 | orchestrator | Sunday 08 March 2026 00:54:14 +0000 (0:00:02.346) 0:00:18.467 ********** 2026-03-08 00:57:06.997159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.997164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.997168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:57:06.997176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.997184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.997188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:57:06.997192 | orchestrator | 2026-03-08 00:57:06.997196 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:57:06.997203 | orchestrator | Sunday 08 March 2026 00:54:16 +0000 (0:00:02.217) 0:00:20.684 ********** 2026-03-08 00:57:06.997207 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:06.997211 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:57:06.997215 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:57:06.997218 | orchestrator | 2026-03-08 00:57:06.997222 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-08 00:57:06.997226 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:00.276) 0:00:20.960 ********** 2026-03-08 00:57:06.997230 | orchestrator | 2026-03-08 00:57:06.997234 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-08 00:57:06.997284 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:00.074) 0:00:21.034 ********** 2026-03-08 00:57:06.997296 | orchestrator | 2026-03-08 00:57:06.997300 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-08 00:57:06.997303 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:00.066) 0:00:21.101 ********** 2026-03-08 00:57:06.997307 | orchestrator | 2026-03-08 00:57:06.997311 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] 
************************ 2026-03-08 00:57:06.997314 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 (0:00:00.067) 0:00:21.168 ********** 2026-03-08 00:57:06.997318 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:06.997322 | orchestrator | 2026-03-08 00:57:06.997326 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-08 00:57:06.997330 | orchestrator | Sunday 08 March 2026 00:54:18 +0000 (0:00:00.701) 0:00:21.869 ********** 2026-03-08 00:57:06.997337 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:57:06.997341 | orchestrator | 2026-03-08 00:57:06.997345 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-08 00:57:06.997349 | orchestrator | Sunday 08 March 2026 00:54:18 +0000 (0:00:00.221) 0:00:22.090 ********** 2026-03-08 00:57:06.997352 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:06.997356 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:57:06.997360 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:57:06.997364 | orchestrator | 2026-03-08 00:57:06.997367 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-08 00:57:06.997371 | orchestrator | Sunday 08 March 2026 00:55:28 +0000 (0:01:10.584) 0:01:32.675 ********** 2026-03-08 00:57:06.997375 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:06.997378 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:57:06.997382 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:57:06.997386 | orchestrator | 2026-03-08 00:57:06.997390 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:57:06.997396 | orchestrator | Sunday 08 March 2026 00:56:54 +0000 (0:01:25.487) 0:02:58.162 ********** 2026-03-08 00:57:06.997400 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-08 00:57:06.997404 | orchestrator | 2026-03-08 00:57:06.997408 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-08 00:57:06.997412 | orchestrator | Sunday 08 March 2026 00:56:55 +0000 (0:00:00.844) 0:02:59.006 ********** 2026-03-08 00:57:06.997415 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:57:06.997419 | orchestrator | 2026-03-08 00:57:06.997423 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-08 00:57:06.997427 | orchestrator | Sunday 08 March 2026 00:56:57 +0000 (0:00:02.464) 0:03:01.471 ********** 2026-03-08 00:57:06.997430 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:57:06.997434 | orchestrator | 2026-03-08 00:57:06.997438 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-08 00:57:06.997442 | orchestrator | Sunday 08 March 2026 00:56:59 +0000 (0:00:02.334) 0:03:03.805 ********** 2026-03-08 00:57:06.997445 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:06.997449 | orchestrator | 2026-03-08 00:57:06.997453 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-08 00:57:06.997463 | orchestrator | Sunday 08 March 2026 00:57:02 +0000 (0:00:02.726) 0:03:06.531 ********** 2026-03-08 00:57:06.997467 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:57:06.997470 | orchestrator | 2026-03-08 00:57:06.997477 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:57:06.997482 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:57:06.997487 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:57:06.997491 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
2026-03-08 00:57:06.997495 | orchestrator | 2026-03-08 00:57:06.997499 | orchestrator | 2026-03-08 00:57:06.997503 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:57:06.997506 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:02.499) 0:03:09.030 ********** 2026-03-08 00:57:06.997510 | orchestrator | =============================================================================== 2026-03-08 00:57:06.997514 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.49s 2026-03-08 00:57:06.997518 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.58s 2026-03-08 00:57:06.997521 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.00s 2026-03-08 00:57:06.997525 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.84s 2026-03-08 00:57:06.997529 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.73s 2026-03-08 00:57:06.997532 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.50s 2026-03-08 00:57:06.997536 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.47s 2026-03-08 00:57:06.997540 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.46s 2026-03-08 00:57:06.997543 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.35s 2026-03-08 00:57:06.997547 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.33s 2026-03-08 00:57:06.997551 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.22s 2026-03-08 00:57:06.997554 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.72s 2026-03-08 00:57:06.997558 | orchestrator | 
service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.32s 2026-03-08 00:57:06.997562 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.94s 2026-03-08 00:57:06.997566 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.84s 2026-03-08 00:57:06.997569 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.70s 2026-03-08 00:57:06.997573 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2026-03-08 00:57:06.997577 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-08 00:57:06.997580 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-08 00:57:06.997584 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-03-08 00:57:07.001599 | orchestrator | 2026-03-08 00:57:07 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:07.001649 | orchestrator | 2026-03-08 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:10.040960 | orchestrator | 2026-03-08 00:57:10 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:10.041665 | orchestrator | 2026-03-08 00:57:10 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:57:10.042593 | orchestrator | 2026-03-08 00:57:10 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:10.042643 | orchestrator | 2026-03-08 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:13.098532 | orchestrator | 2026-03-08 00:57:13 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:13.099460 | orchestrator | 2026-03-08 00:57:13 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 
2026-03-08 00:57:13.101622 | orchestrator | 2026-03-08 00:57:13 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:13.101774 | orchestrator | 2026-03-08 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:16.137491 | orchestrator | 2026-03-08 00:57:16 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:16.138735 | orchestrator | 2026-03-08 00:57:16 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:57:16.139934 | orchestrator | 2026-03-08 00:57:16 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:16.139959 | orchestrator | 2026-03-08 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:19.183386 | orchestrator | 2026-03-08 00:57:19 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:19.183456 | orchestrator | 2026-03-08 00:57:19 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:57:19.186884 | orchestrator | 2026-03-08 00:57:19 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:19.186955 | orchestrator | 2026-03-08 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:22.218420 | orchestrator | 2026-03-08 00:57:22 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:22.218517 | orchestrator | 2026-03-08 00:57:22 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:57:22.219434 | orchestrator | 2026-03-08 00:57:22 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:57:22.219457 | orchestrator | 2026-03-08 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:57:25.265766 | orchestrator | 2026-03-08 00:57:25 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED 2026-03-08 00:57:25.273899 | orchestrator | 2026-03-08 
00:57:25 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED
2026-03-08 00:57:25.277102 | orchestrator | 2026-03-08 00:57:25 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:57:25.277186 | orchestrator | 2026-03-08 00:57:25 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:23.222521 | orchestrator | 2026-03-08 00:58:23 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state STARTED
2026-03-08 00:58:23.224398 | orchestrator | 2026-03-08 00:58:23 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in
state STARTED
2026-03-08 00:58:23.224921 | orchestrator | 2026-03-08 00:58:23 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:58:23.225094 | orchestrator | 2026-03-08 00:58:23 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:26.269851 | orchestrator | 2026-03-08 00:58:26 | INFO  | Task bdfeabcf-1767-4976-82d4-41478a9fc854 is in state SUCCESS
2026-03-08 00:58:26.270875 | orchestrator |
2026-03-08 00:58:26.270911 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 00:58:26.270930 | orchestrator | 2.16.14
2026-03-08 00:58:26.270943 | orchestrator |
2026-03-08 00:58:26.270950 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-08 00:58:26.270957 | orchestrator |
2026-03-08 00:58:26.270963 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-08 00:58:26.270970 | orchestrator | Sunday 08 March 2026 00:56:13 +0000 (0:00:00.630) 0:00:00.630 **********
2026-03-08 00:58:26.270975 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:58:26.270983 | orchestrator |
2026-03-08 00:58:26.270989 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-08 00:58:26.270996 | orchestrator | Sunday 08 March 2026 00:56:13 +0000 (0:00:00.634) 0:00:01.264 **********
2026-03-08 00:58:26.271003 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:26.271011 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:26.271017 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:26.271024 | orchestrator |
2026-03-08 00:58:26.271031 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-08 00:58:26.271037 | orchestrator | Sunday 08 March 2026 00:56:14 +0000 (0:00:00.611) 0:00:01.876 ********** 2026-03-08
00:58:26.271044 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.271050 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271056 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271063 | orchestrator | 2026-03-08 00:58:26.271069 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-08 00:58:26.271075 | orchestrator | Sunday 08 March 2026 00:56:14 +0000 (0:00:00.303) 0:00:02.179 ********** 2026-03-08 00:58:26.271082 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.271088 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271095 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271101 | orchestrator | 2026-03-08 00:58:26.271123 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-08 00:58:26.271130 | orchestrator | Sunday 08 March 2026 00:56:15 +0000 (0:00:00.848) 0:00:03.028 ********** 2026-03-08 00:58:26.271157 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.271164 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271170 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271176 | orchestrator | 2026-03-08 00:58:26.271183 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-08 00:58:26.271189 | orchestrator | Sunday 08 March 2026 00:56:15 +0000 (0:00:00.307) 0:00:03.336 ********** 2026-03-08 00:58:26.271195 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.271202 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271208 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271213 | orchestrator | 2026-03-08 00:58:26.271220 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-08 00:58:26.271226 | orchestrator | Sunday 08 March 2026 00:56:16 +0000 (0:00:00.314) 0:00:03.651 ********** 2026-03-08 00:58:26.271232 | orchestrator | ok: [testbed-node-3] 2026-03-08 
00:58:26.271239 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271245 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271251 | orchestrator | 2026-03-08 00:58:26.271257 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-08 00:58:26.271263 | orchestrator | Sunday 08 March 2026 00:56:16 +0000 (0:00:00.312) 0:00:03.963 ********** 2026-03-08 00:58:26.271270 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.271277 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.271283 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.271289 | orchestrator | 2026-03-08 00:58:26.271296 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-08 00:58:26.271302 | orchestrator | Sunday 08 March 2026 00:56:16 +0000 (0:00:00.510) 0:00:04.473 ********** 2026-03-08 00:58:26.271308 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.271315 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271321 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271327 | orchestrator | 2026-03-08 00:58:26.271333 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-08 00:58:26.271424 | orchestrator | Sunday 08 March 2026 00:56:17 +0000 (0:00:00.309) 0:00:04.783 ********** 2026-03-08 00:58:26.271659 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:58:26.271667 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:58:26.271673 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:58:26.271679 | orchestrator | 2026-03-08 00:58:26.271686 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-08 00:58:26.271692 | orchestrator | Sunday 08 March 2026 
00:56:17 +0000 (0:00:00.644) 0:00:05.428 ********** 2026-03-08 00:58:26.271699 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.271705 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.271712 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.271718 | orchestrator | 2026-03-08 00:58:26.271724 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-08 00:58:26.271731 | orchestrator | Sunday 08 March 2026 00:56:18 +0000 (0:00:00.464) 0:00:05.892 ********** 2026-03-08 00:58:26.271737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:58:26.271743 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:58:26.271750 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:58:26.271756 | orchestrator | 2026-03-08 00:58:26.271762 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-08 00:58:26.271768 | orchestrator | Sunday 08 March 2026 00:56:20 +0000 (0:00:02.133) 0:00:08.025 ********** 2026-03-08 00:58:26.271775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-08 00:58:26.271781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:58:26.271788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-08 00:58:26.271804 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.271810 | orchestrator | 2026-03-08 00:58:26.271826 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-08 00:58:26.271833 | orchestrator | Sunday 08 March 2026 00:56:21 +0000 (0:00:00.658) 0:00:08.683 ********** 2026-03-08 00:58:26.271842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.271851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.271857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.271863 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.271870 | orchestrator | 2026-03-08 00:58:26.271876 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-08 00:58:26.271883 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.849) 0:00:09.533 ********** 2026-03-08 00:58:26.271898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.271907 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.271914 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.271920 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.271927 | orchestrator | 2026-03-08 00:58:26.271933 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-08 00:58:26.271939 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.338) 0:00:09.871 ********** 2026-03-08 00:58:26.271947 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5bf956a943ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-08 00:56:19.099598', 'end': '2026-03-08 00:56:19.145683', 'delta': '0:00:00.046085', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5bf956a943ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-08 00:58:26.271957 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6cae527f529f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-08 00:56:19.833229', 'end': '2026-03-08 00:56:19.862778', 'delta': '0:00:00.029549', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6cae527f529f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-08 00:58:26.271975 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cc061b910b4e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-08 00:56:20.351465', 'end': '2026-03-08 00:56:20.387899', 'delta': '0:00:00.036434', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cc061b910b4e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-08 00:58:26.271982 | orchestrator | 2026-03-08 00:58:26.271988 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-08 00:58:26.271995 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.208) 0:00:10.079 ********** 2026-03-08 00:58:26.272001 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.272007 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.272013 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.272019 | orchestrator | 2026-03-08 00:58:26.272025 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-08 00:58:26.272032 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.416) 0:00:10.496 ********** 2026-03-08 00:58:26.272037 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2026-03-08 00:58:26.272043 | orchestrator | 2026-03-08 00:58:26.272053 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-08 00:58:26.272060 | orchestrator | Sunday 08 March 2026 00:56:24 +0000 (0:00:01.700) 0:00:12.196 ********** 2026-03-08 00:58:26.272067 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272073 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272080 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272112 | orchestrator | 2026-03-08 00:58:26.272118 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-08 00:58:26.272188 | orchestrator | Sunday 08 March 2026 00:56:25 +0000 (0:00:00.318) 0:00:12.515 ********** 2026-03-08 00:58:26.272366 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272374 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272380 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272386 | orchestrator | 2026-03-08 00:58:26.272392 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:58:26.272398 | orchestrator | Sunday 08 March 2026 00:56:25 +0000 (0:00:00.434) 0:00:12.949 ********** 2026-03-08 00:58:26.272405 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272411 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272418 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272424 | orchestrator | 2026-03-08 00:58:26.272430 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-08 00:58:26.272436 | orchestrator | Sunday 08 March 2026 00:56:25 +0000 (0:00:00.496) 0:00:13.446 ********** 2026-03-08 00:58:26.272443 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.272449 | orchestrator | 2026-03-08 00:58:26.272455 | orchestrator | TASK [ceph-facts : Generate 
cluster fsid] ************************************** 2026-03-08 00:58:26.272461 | orchestrator | Sunday 08 March 2026 00:56:26 +0000 (0:00:00.148) 0:00:13.594 ********** 2026-03-08 00:58:26.272473 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272479 | orchestrator | 2026-03-08 00:58:26.272486 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:58:26.272492 | orchestrator | Sunday 08 March 2026 00:56:26 +0000 (0:00:00.289) 0:00:13.883 ********** 2026-03-08 00:58:26.272499 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272505 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272511 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272517 | orchestrator | 2026-03-08 00:58:26.272523 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-08 00:58:26.272546 | orchestrator | Sunday 08 March 2026 00:56:26 +0000 (0:00:00.309) 0:00:14.193 ********** 2026-03-08 00:58:26.272552 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272558 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272564 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272570 | orchestrator | 2026-03-08 00:58:26.272575 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-08 00:58:26.272581 | orchestrator | Sunday 08 March 2026 00:56:27 +0000 (0:00:00.328) 0:00:14.522 ********** 2026-03-08 00:58:26.272587 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272593 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272599 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272605 | orchestrator | 2026-03-08 00:58:26.272612 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-08 00:58:26.272618 | orchestrator | Sunday 08 March 2026 00:56:27 +0000 
(0:00:00.514) 0:00:15.036 ********** 2026-03-08 00:58:26.272624 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272631 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272637 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272643 | orchestrator | 2026-03-08 00:58:26.272649 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-08 00:58:26.272655 | orchestrator | Sunday 08 March 2026 00:56:27 +0000 (0:00:00.333) 0:00:15.369 ********** 2026-03-08 00:58:26.272661 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272667 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272674 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272680 | orchestrator | 2026-03-08 00:58:26.272685 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-08 00:58:26.272692 | orchestrator | Sunday 08 March 2026 00:56:28 +0000 (0:00:00.316) 0:00:15.685 ********** 2026-03-08 00:58:26.272698 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272705 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272711 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272739 | orchestrator | 2026-03-08 00:58:26.272746 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-08 00:58:26.272752 | orchestrator | Sunday 08 March 2026 00:56:28 +0000 (0:00:00.334) 0:00:16.020 ********** 2026-03-08 00:58:26.272758 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.272765 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.272770 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.272776 | orchestrator | 2026-03-08 00:58:26.272783 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-08 00:58:26.272790 | orchestrator | Sunday 08 March 2026 00:56:29 +0000 
(0:00:00.512) 0:00:16.532 ********** 2026-03-08 00:58:26.272797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db', 'dm-uuid-LVM-f03kY5XdcO8KIjPmgU6ez8t0FLA66q5e6bg790Rq5xMganTUcZGHGUvDXtiPEuVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e', 'dm-uuid-LVM-aBixsF7VHwJvWC9cdwUNtNJgwkKQp0oeNIuSXWBWS1FFfMSG9j3hPuZReyvMCd3n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb', 'dm-uuid-LVM-1DgtEGOZqDrAYsIUYWXjWt4e3SxVmhLmzrC21Cb8uHjcZdNtfE2b9sZFbwNam0np'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.272926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.272952 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mrAEW5-XpgO-ylIk-3aJm-Tg5F-lqm3-bQSDp1', 'scsi-0QEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0', 'scsi-SQEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.272960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d', 'dm-uuid-LVM-aqoktTFUlq7SjIJKcG7i1ikNBv383ZINRI52RtFLOJBuoXIuLDsmN9zlb65VZXV7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CoIbeO-WF0K-M7eU-N2ox-nLCt-t6XQ-gHHAOC', 'scsi-0QEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf', 'scsi-SQEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe', 'scsi-SQEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-08 00:58:26.273064 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.273071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45', 'dm-uuid-LVM-U2QUDxGDRC151Udr4jM5hfm2YaN283x19epxysF51M2bfpRRaRQBoYxHcYR9gtnr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273141 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57', 'dm-uuid-LVM-RRaMeRIXPIlbqQADcFEr6dO8YwR5B90PKNztrD7g57c5m6jUbbYIAolqfQ3zFJpa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AyfzLZ-nDAT-KP8U-BC7i-9Gme-sH3R-MKWXQG', 'scsi-0QEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2', 'scsi-SQEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HcH3QJ-5LsI-kc2v-MoPJ-2a34-l4rS-3VHXB9', 'scsi-0QEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c', 'scsi-SQEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8', 'scsi-SQEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273209 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.273216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:26.273257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oflPUY-L2CM-wtDn-8Yeo-R4dI-ZTmC-cIevDQ', 'scsi-0QEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698', 'scsi-SQEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFbuCy-h99o-f0ck-Xj07-2du6-A3pz-GYkTZk', 'scsi-0QEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751', 'scsi-SQEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087', 'scsi-SQEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:26.273362 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:26.273368 | orchestrator | 2026-03-08 00:58:26.273375 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-08 00:58:26.273381 | orchestrator | Sunday 08 March 2026 00:56:29 +0000 (0:00:00.560) 0:00:17.092 ********** 2026-03-08 00:58:26.273388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db', 'dm-uuid-LVM-f03kY5XdcO8KIjPmgU6ez8t0FLA66q5e6bg790Rq5xMganTUcZGHGUvDXtiPEuVk'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e', 'dm-uuid-LVM-aBixsF7VHwJvWC9cdwUNtNJgwkKQp0oeNIuSXWBWS1FFfMSG9j3hPuZReyvMCd3n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273406 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273473 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb', 'dm-uuid-LVM-1DgtEGOZqDrAYsIUYWXjWt4e3SxVmhLmzrC21Cb8uHjcZdNtfE2b9sZFbwNam0np'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16', 'scsi-SQEMU_QEMU_HARDDISK_81fff5b5-671f-4cf3-9542-12bc3254aff6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:58:26.273503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d', 'dm-uuid-LVM-aqoktTFUlq7SjIJKcG7i1ikNBv383ZINRI52RtFLOJBuoXIuLDsmN9zlb65VZXV7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d02f715b--f6fc--5dd9--afa3--4d404d1973db-osd--block--d02f715b--f6fc--5dd9--afa3--4d404d1973db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mrAEW5-XpgO-ylIk-3aJm-Tg5F-lqm3-bQSDp1', 'scsi-0QEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0', 'scsi-SQEMU_QEMU_HARDDISK_9d9faa04-2f3c-436d-9a5f-1631de10dde0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--06971c7f--d1d9--5519--989d--752a08544c4e-osd--block--06971c7f--d1d9--5519--989d--752a08544c4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CoIbeO-WF0K-M7eU-N2ox-nLCt-t6XQ-gHHAOC', 'scsi-0QEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf', 'scsi-SQEMU_QEMU_HARDDISK_13127977-6a78-466e-81ef-45b79edafbaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe', 'scsi-SQEMU_QEMU_HARDDISK_4f1e8e18-dbd4-42e7-a856-4b9aa5f72ffe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273681 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.273693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16', 'scsi-SQEMU_QEMU_HARDDISK_02cca760-af2e-4d6c-87bb-7fb3c7fbc633-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:58:26.273749 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a9457a91--34ca--5e42--9332--0f1ee38194fb-osd--block--a9457a91--34ca--5e42--9332--0f1ee38194fb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AyfzLZ-nDAT-KP8U-BC7i-9Gme-sH3R-MKWXQG', 'scsi-0QEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2', 'scsi-SQEMU_QEMU_HARDDISK_2c822381-711e-4b88-8f0f-ccd9d68009a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273756 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45', 'dm-uuid-LVM-U2QUDxGDRC151Udr4jM5hfm2YaN283x19epxysF51M2bfpRRaRQBoYxHcYR9gtnr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ccaad6c6--3747--58dc--9b51--af637ea3a93d-osd--block--ccaad6c6--3747--58dc--9b51--af637ea3a93d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HcH3QJ-5LsI-kc2v-MoPJ-2a34-l4rS-3VHXB9', 'scsi-0QEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c', 'scsi-SQEMU_QEMU_HARDDISK_272aa0da-7148-40c6-996c-fa485e579a0c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57', 'dm-uuid-LVM-RRaMeRIXPIlbqQADcFEr6dO8YwR5B90PKNztrD7g57c5m6jUbbYIAolqfQ3zFJpa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8', 'scsi-SQEMU_QEMU_HARDDISK_584a8cd2-f1cf-4783-b73b-bdfda5fabfa8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273793 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273800 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273806 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:58:26.273817 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273833 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273839 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273848 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef0824cf-3936-4a61-9f2a-e804dfd60cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9742d483--d5c0--528b--aa0f--657894200b45-osd--block--9742d483--d5c0--528b--aa0f--657894200b45'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oflPUY-L2CM-wtDn-8Yeo-R4dI-ZTmC-cIevDQ', 'scsi-0QEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698', 'scsi-SQEMU_QEMU_HARDDISK_875b6ffb-6cc0-40cb-be90-c8d29b416698'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273893 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e5322502--cf2a--5eb6--8fcb--1a734f718f57-osd--block--e5322502--cf2a--5eb6--8fcb--1a734f718f57'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFbuCy-h99o-f0ck-Xj07-2du6-A3pz-GYkTZk', 'scsi-0QEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751', 'scsi-SQEMU_QEMU_HARDDISK_09c0cde5-d1e2-470c-ab2a-905eda1e5751'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:26.273903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087', 'scsi-SQEMU_QEMU_HARDDISK_c3f7b7f4-f798-492f-86ac-7ce39be70087'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:58:26.273913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-08 00:58:26.273919 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.273926 | orchestrator |
2026-03-08 00:58:26.273932 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-08 00:58:26.273939 | orchestrator | Sunday 08 March 2026 00:56:30 +0000 (0:00:00.619) 0:00:17.712 **********
2026-03-08 00:58:26.273945 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:26.273952 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:26.273958 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:26.273965 | orchestrator |
2026-03-08 00:58:26.273971 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-08 00:58:26.273977 | orchestrator | Sunday 08 March 2026 00:56:30 +0000 (0:00:00.710) 0:00:18.423 **********
2026-03-08 00:58:26.273983 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:26.273989 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:26.273995 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:26.274001 | orchestrator |
2026-03-08 00:58:26.274007 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-08 00:58:26.274054 | orchestrator | Sunday 08 March 2026 00:56:31 +0000 (0:00:00.482) 0:00:18.905 **********
2026-03-08 00:58:26.274063 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:26.274069 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:26.274075 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:26.274081 | orchestrator |
2026-03-08 00:58:26.274093 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-08 00:58:26.274099 | orchestrator | Sunday 08 March 2026 00:56:32 +0000 (0:00:00.347) 0:00:19.542 **********
2026-03-08 00:58:26.274105 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274111 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274117 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274124 | orchestrator |
2026-03-08 00:58:26.274130 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-08 00:58:26.274142 | orchestrator | Sunday 08 March 2026 00:56:32 +0000 (0:00:00.411) 0:00:19.890 **********
2026-03-08 00:58:26.274148 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274155 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274161 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274167 | orchestrator |
2026-03-08 00:58:26.274174 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-08 00:58:26.274180 | orchestrator | Sunday 08 March 2026 00:56:32 +0000 (0:00:00.583) 0:00:20.301 **********
2026-03-08 00:58:26.274186 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274192 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274198 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274205 | orchestrator |
2026-03-08 00:58:26.274212 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-08 00:58:26.274218 | orchestrator | Sunday 08 March 2026 00:56:33 +0000 (0:00:00.583) 0:00:20.885 **********
2026-03-08 00:58:26.274224 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-08 00:58:26.274231 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-08 00:58:26.274237 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-08 00:58:26.274243 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-08 00:58:26.274249 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-08 00:58:26.274256 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-08 00:58:26.274262 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-08 00:58:26.274268 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-08 00:58:26.274275 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-08 00:58:26.274281 | orchestrator |
2026-03-08 00:58:26.274287 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-08 00:58:26.274293 | orchestrator | Sunday 08 March 2026 00:56:34 +0000 (0:00:00.866) 0:00:21.752 **********
2026-03-08 00:58:26.274300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-08 00:58:26.274307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-08 00:58:26.274313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-08 00:58:26.274320 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274326 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-08 00:58:26.274332 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-08 00:58:26.274339 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-08 00:58:26.274345 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274352 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-08 00:58:26.274358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-08 00:58:26.274364 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-08 00:58:26.274370 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274376 | orchestrator |
2026-03-08 00:58:26.274382 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-08 00:58:26.274389 | orchestrator | Sunday 08 March 2026 00:56:34 +0000 (0:00:00.368) 0:00:22.120 **********
2026-03-08 00:58:26.274396 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:58:26.274402 | orchestrator |
2026-03-08 00:58:26.274408 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-08 00:58:26.274415 | orchestrator | Sunday 08 March 2026 00:56:35 +0000 (0:00:00.796) 0:00:22.916 **********
2026-03-08 00:58:26.274426 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274432 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274438 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274444 | orchestrator |
2026-03-08 00:58:26.274450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-08 00:58:26.274461 | orchestrator | Sunday 08 March 2026 00:56:35 +0000 (0:00:00.341) 0:00:23.258 **********
2026-03-08 00:58:26.274467 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274474 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274480 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274486 | orchestrator |
2026-03-08 00:58:26.274493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-08 00:58:26.274499 | orchestrator | Sunday 08 March 2026 00:56:36 +0000 (0:00:00.361) 0:00:23.620 **********
2026-03-08 00:58:26.274505 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274511 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:26.274517 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:26.274524 | orchestrator |
2026-03-08 00:58:26.274545 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-08 00:58:26.274552 | orchestrator | Sunday 08 March 2026 00:56:36 +0000 (0:00:00.316) 0:00:23.937 **********
2026-03-08 00:58:26.274558 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:26.274564 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:26.274570 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:26.274576 | orchestrator |
2026-03-08 00:58:26.274583 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-08 00:58:26.274589 | orchestrator | Sunday 08 March 2026 00:56:37 +0000 (0:00:00.625) 0:00:24.562 **********
2026-03-08 00:58:26.274595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:58:26.274601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:58:26.274606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:58:26.274616 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:26.274621 |
orchestrator | 2026-03-08 00:58:26.274627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-08 00:58:26.274633 | orchestrator | Sunday 08 March 2026 00:56:37 +0000 (0:00:00.386) 0:00:24.948 ********** 2026-03-08 00:58:26.274640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:58:26.274646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:58:26.274653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:58:26.274659 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.274665 | orchestrator | 2026-03-08 00:58:26.274671 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-08 00:58:26.274676 | orchestrator | Sunday 08 March 2026 00:56:37 +0000 (0:00:00.392) 0:00:25.341 ********** 2026-03-08 00:58:26.274682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:58:26.274689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:58:26.274695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:58:26.274702 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.274708 | orchestrator | 2026-03-08 00:58:26.274714 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-08 00:58:26.274720 | orchestrator | Sunday 08 March 2026 00:56:38 +0000 (0:00:00.391) 0:00:25.732 ********** 2026-03-08 00:58:26.274727 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:26.274733 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:26.274739 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:26.274745 | orchestrator | 2026-03-08 00:58:26.274752 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-08 00:58:26.274758 | orchestrator | Sunday 08 March 2026 00:56:38 +0000 
(0:00:00.351) 0:00:26.084 ********** 2026-03-08 00:58:26.274764 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-08 00:58:26.274770 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-08 00:58:26.274777 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-08 00:58:26.274783 | orchestrator | 2026-03-08 00:58:26.274790 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-08 00:58:26.274800 | orchestrator | Sunday 08 March 2026 00:56:39 +0000 (0:00:00.487) 0:00:26.571 ********** 2026-03-08 00:58:26.274807 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:58:26.274813 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:58:26.274819 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:58:26.274826 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:58:26.274832 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:58:26.274838 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:58:26.274844 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-08 00:58:26.274850 | orchestrator | 2026-03-08 00:58:26.274856 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-08 00:58:26.274863 | orchestrator | Sunday 08 March 2026 00:56:40 +0000 (0:00:00.985) 0:00:27.557 ********** 2026-03-08 00:58:26.274869 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:58:26.274875 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:58:26.274881 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:58:26.274887 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:58:26.274894 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:58:26.274900 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:58:26.274910 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-08 00:58:26.274917 | orchestrator | 2026-03-08 00:58:26.274923 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-08 00:58:26.274929 | orchestrator | Sunday 08 March 2026 00:56:42 +0000 (0:00:02.098) 0:00:29.656 ********** 2026-03-08 00:58:26.274935 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:26.274941 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:26.274947 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-08 00:58:26.274954 | orchestrator | 2026-03-08 00:58:26.274960 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-08 00:58:26.274967 | orchestrator | Sunday 08 March 2026 00:56:42 +0000 (0:00:00.428) 0:00:30.084 ********** 2026-03-08 00:58:26.274974 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:26.274981 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-08 00:58:26.274992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:26.274998 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:26.275005 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:26.275015 | orchestrator | 2026-03-08 00:58:26.275022 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-08 00:58:26.275028 | orchestrator | Sunday 08 March 2026 00:57:28 +0000 (0:00:45.999) 0:01:16.084 ********** 2026-03-08 00:58:26.275034 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275040 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275046 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275059 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275065 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 
00:58:26.275071 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-08 00:58:26.275077 | orchestrator | 2026-03-08 00:58:26.275084 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-08 00:58:26.275090 | orchestrator | Sunday 08 March 2026 00:57:54 +0000 (0:00:26.004) 0:01:42.088 ********** 2026-03-08 00:58:26.275096 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275102 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275108 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275115 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275121 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275127 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275133 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:58:26.275140 | orchestrator | 2026-03-08 00:58:26.275146 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-08 00:58:26.275152 | orchestrator | Sunday 08 March 2026 00:58:07 +0000 (0:00:12.560) 0:01:54.649 ********** 2026-03-08 00:58:26.275159 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275165 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:26.275171 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:26.275177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275184 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:26.275193 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:26.275200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275206 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:26.275212 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:26.275218 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275224 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:26.275231 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:26.275237 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275244 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:26.275255 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:26.275262 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:26.275268 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:26.275274 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:26.275280 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-08 00:58:26.275286 | orchestrator | 2026-03-08 00:58:26.275293 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:58:26.275302 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-08 00:58:26.275310 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-08 00:58:26.275316 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-08 00:58:26.275322 | orchestrator | 2026-03-08 00:58:26.275328 | orchestrator | 2026-03-08 00:58:26.275335 | orchestrator | 2026-03-08 00:58:26.275342 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:58:26.275348 | orchestrator | Sunday 08 March 2026 00:58:24 +0000 (0:00:17.117) 0:02:11.767 ********** 2026-03-08 00:58:26.275354 | orchestrator | =============================================================================== 2026-03-08 00:58:26.275361 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.00s 2026-03-08 00:58:26.275367 | orchestrator | generate keys ---------------------------------------------------------- 26.00s 2026-03-08 00:58:26.275373 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.12s 2026-03-08 00:58:26.275379 | orchestrator | get keys from monitors ------------------------------------------------- 12.56s 2026-03-08 00:58:26.275385 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.13s 2026-03-08 00:58:26.275392 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.10s 2026-03-08 00:58:26.275398 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s 2026-03-08 00:58:26.275404 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s 2026-03-08 00:58:26.275410 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-03-08 00:58:26.275417 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s 2026-03-08 
00:58:26.275423 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.85s 2026-03-08 00:58:26.275429 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s 2026-03-08 00:58:26.275436 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-03-08 00:58:26.275442 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s 2026-03-08 00:58:26.275448 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-03-08 00:58:26.275454 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-03-08 00:58:26.275461 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s 2026-03-08 00:58:26.275466 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s 2026-03-08 00:58:26.275472 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2026-03-08 00:58:26.275478 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.61s 2026-03-08 00:58:26.275485 | orchestrator | 2026-03-08 00:58:26 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:26.275491 | orchestrator | 2026-03-08 00:58:26 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:26.275502 | orchestrator | 2026-03-08 00:58:26 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:26.275509 | orchestrator | 2026-03-08 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:29.323829 | orchestrator | 2026-03-08 00:58:29 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:29.326263 | orchestrator | 2026-03-08 00:58:29 | INFO  | Task 
162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:29.328282 | orchestrator | 2026-03-08 00:58:29 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:29.328696 | orchestrator | 2026-03-08 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:32.372249 | orchestrator | 2026-03-08 00:58:32 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:32.374329 | orchestrator | 2026-03-08 00:58:32 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:32.376375 | orchestrator | 2026-03-08 00:58:32 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:32.376440 | orchestrator | 2026-03-08 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:35.422227 | orchestrator | 2026-03-08 00:58:35 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:35.422316 | orchestrator | 2026-03-08 00:58:35 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:35.423160 | orchestrator | 2026-03-08 00:58:35 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:35.423208 | orchestrator | 2026-03-08 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:38.469232 | orchestrator | 2026-03-08 00:58:38 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:38.470927 | orchestrator | 2026-03-08 00:58:38 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:38.473006 | orchestrator | 2026-03-08 00:58:38 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:38.473070 | orchestrator | 2026-03-08 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:41.511165 | orchestrator | 2026-03-08 00:58:41 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state 
STARTED 2026-03-08 00:58:41.512574 | orchestrator | 2026-03-08 00:58:41 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:41.513939 | orchestrator | 2026-03-08 00:58:41 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:41.513989 | orchestrator | 2026-03-08 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:44.572772 | orchestrator | 2026-03-08 00:58:44 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:44.574264 | orchestrator | 2026-03-08 00:58:44 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:44.583323 | orchestrator | 2026-03-08 00:58:44 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:44.583390 | orchestrator | 2026-03-08 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:47.627975 | orchestrator | 2026-03-08 00:58:47 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:47.629102 | orchestrator | 2026-03-08 00:58:47 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:47.630161 | orchestrator | 2026-03-08 00:58:47 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:47.631004 | orchestrator | 2026-03-08 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:50.683368 | orchestrator | 2026-03-08 00:58:50 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state STARTED 2026-03-08 00:58:50.686243 | orchestrator | 2026-03-08 00:58:50 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED 2026-03-08 00:58:50.687834 | orchestrator | 2026-03-08 00:58:50 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED 2026-03-08 00:58:50.688842 | orchestrator | 2026-03-08 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:53.732805 | orchestrator | 
2026-03-08 00:58:53 | INFO  | Task a5027b83-a664-481e-97c2-c3ab386da766 is in state SUCCESS 2026-03-08 00:58:53.733881 | orchestrator | 2026-03-08 00:58:53.733931 | orchestrator | 2026-03-08 00:58:53.733937 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:58:53.733943 | orchestrator | 2026-03-08 00:58:53.733947 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:58:53.733951 | orchestrator | Sunday 08 March 2026 00:57:04 +0000 (0:00:00.276) 0:00:00.276 ********** 2026-03-08 00:58:53.733955 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.733960 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.733964 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.733968 | orchestrator | 2026-03-08 00:58:53.733972 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:58:53.733976 | orchestrator | Sunday 08 March 2026 00:57:04 +0000 (0:00:00.313) 0:00:00.589 ********** 2026-03-08 00:58:53.733980 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-08 00:58:53.733984 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-08 00:58:53.733988 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-08 00:58:53.733991 | orchestrator | 2026-03-08 00:58:53.733995 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-08 00:58:53.733999 | orchestrator | 2026-03-08 00:58:53.734003 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-08 00:58:53.734009 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.530) 0:00:01.119 ********** 2026-03-08 00:58:53.734059 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:58:53.734068 | orchestrator 
| 2026-03-08 00:58:53.734075 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-08 00:58:53.734081 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.496) 0:00:01.615 ********** 2026-03-08 00:58:53.734110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:53.734150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:53.734159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:53.734167 | orchestrator | 2026-03-08 00:58:53.734171 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-08 00:58:53.734175 | orchestrator | Sunday 08 March 2026 00:57:06 +0000 (0:00:00.967) 0:00:02.583 ********** 2026-03-08 00:58:53.734179 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.734183 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.734186 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.734241 | orchestrator | 2026-03-08 00:58:53.734246 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-08 00:58:53.734250 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:00.532) 0:00:03.116 ********** 2026-03-08 00:58:53.734257 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-08 00:58:53.734292 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-08 00:58:53.734401 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'ironic', 'enabled': False})  2026-03-08 00:58:53.734408 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-08 00:58:53.734415 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-08 00:58:53.734421 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-08 00:58:53.734427 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-08 00:58:53.734439 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-08 00:58:53.734445 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-08 00:58:53.734451 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-08 00:58:53.734457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-08 00:58:53.734463 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-08 00:58:53.734468 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-08 00:58:53.734516 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-08 00:58:53.734524 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-08 00:58:53.734531 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-08 00:58:53.734537 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-08 00:58:53.734545 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-08 00:58:53.734553 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-08 00:58:53.734567 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-08 00:58:53.734580 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-08 00:58:53.734587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-08 00:58:53.734591 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-08 00:58:53.734595 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-08 00:58:53.734600 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-08 00:58:53.734606 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-08 00:58:53.734609 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-08 00:58:53.734613 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-08 00:58:53.734617 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-08 00:58:53.734621 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-08 00:58:53.734625 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-08 00:58:53.734629 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-08 00:58:53.734633 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-08 00:58:53.734640 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-08 00:58:53.734646 | orchestrator | 2026-03-08 00:58:53.734655 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.734663 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.723) 0:00:03.839 ********** 2026-03-08 00:58:53.734670 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.734676 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.734682 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.734687 | orchestrator | 2026-03-08 00:58:53.734692 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.734697 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.364) 0:00:04.203 ********** 2026-03-08 00:58:53.734711 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.734718 | orchestrator | 2026-03-08 00:58:53.734724 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.734730 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.120) 0:00:04.324 ********** 2026-03-08 00:58:53.734736 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.734741 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.734748 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.734754 | orchestrator | 2026-03-08 00:58:53.734761 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-03-08 00:58:53.734767 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.496) 0:00:04.820 ********** 2026-03-08 00:58:53.734773 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.734786 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.734795 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.734803 | orchestrator | 2026-03-08 00:58:53.734811 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.734817 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.293) 0:00:05.114 ********** 2026-03-08 00:58:53.734823 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.734829 | orchestrator | 2026-03-08 00:58:53.734835 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.734841 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.134) 0:00:05.249 ********** 2026-03-08 00:58:53.734846 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.734852 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.734857 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.734863 | orchestrator | 2026-03-08 00:58:53.734869 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.734875 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.306) 0:00:05.555 ********** 2026-03-08 00:58:53.734880 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.734886 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.734892 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.734897 | orchestrator | 2026-03-08 00:58:53.734903 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.734909 | orchestrator | Sunday 08 March 2026 00:57:10 +0000 (0:00:00.340) 0:00:05.896 ********** 2026-03-08 
00:58:53.734914 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.734920 | orchestrator | 2026-03-08 00:58:53.734926 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.734933 | orchestrator | Sunday 08 March 2026 00:57:10 +0000 (0:00:00.414) 0:00:06.311 ********** 2026-03-08 00:58:53.734938 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.734950 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.734957 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.734963 | orchestrator | 2026-03-08 00:58:53.734970 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.734977 | orchestrator | Sunday 08 March 2026 00:57:10 +0000 (0:00:00.324) 0:00:06.635 ********** 2026-03-08 00:58:53.734981 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.734984 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.734988 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.734992 | orchestrator | 2026-03-08 00:58:53.734995 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.734999 | orchestrator | Sunday 08 March 2026 00:57:11 +0000 (0:00:00.328) 0:00:06.964 ********** 2026-03-08 00:58:53.735003 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735007 | orchestrator | 2026-03-08 00:58:53.735011 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735015 | orchestrator | Sunday 08 March 2026 00:57:11 +0000 (0:00:00.206) 0:00:07.170 ********** 2026-03-08 00:58:53.735019 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735024 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735030 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735040 | orchestrator | 2026-03-08 00:58:53.735047 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-03-08 00:58:53.735053 | orchestrator | Sunday 08 March 2026 00:57:11 +0000 (0:00:00.295) 0:00:07.466 ********** 2026-03-08 00:58:53.735059 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.735066 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.735072 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.735077 | orchestrator | 2026-03-08 00:58:53.735083 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.735090 | orchestrator | Sunday 08 March 2026 00:57:12 +0000 (0:00:00.664) 0:00:08.131 ********** 2026-03-08 00:58:53.735096 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735104 | orchestrator | 2026-03-08 00:58:53.735110 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735123 | orchestrator | Sunday 08 March 2026 00:57:12 +0000 (0:00:00.130) 0:00:08.262 ********** 2026-03-08 00:58:53.735131 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735136 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735140 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735144 | orchestrator | 2026-03-08 00:58:53.735149 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.735154 | orchestrator | Sunday 08 March 2026 00:57:12 +0000 (0:00:00.317) 0:00:08.579 ********** 2026-03-08 00:58:53.735158 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.735163 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.735167 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.735172 | orchestrator | 2026-03-08 00:58:53.735176 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.735181 | orchestrator | Sunday 08 March 2026 00:57:13 +0000 (0:00:00.304) 0:00:08.884 ********** 
2026-03-08 00:58:53.735185 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735190 | orchestrator | 2026-03-08 00:58:53.735194 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735199 | orchestrator | Sunday 08 March 2026 00:57:13 +0000 (0:00:00.150) 0:00:09.034 ********** 2026-03-08 00:58:53.735203 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735207 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735212 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735216 | orchestrator | 2026-03-08 00:58:53.735221 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.735232 | orchestrator | Sunday 08 March 2026 00:57:13 +0000 (0:00:00.286) 0:00:09.321 ********** 2026-03-08 00:58:53.735237 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.735242 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.735246 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.735251 | orchestrator | 2026-03-08 00:58:53.735255 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.735259 | orchestrator | Sunday 08 March 2026 00:57:14 +0000 (0:00:00.583) 0:00:09.905 ********** 2026-03-08 00:58:53.735264 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735268 | orchestrator | 2026-03-08 00:58:53.735272 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735277 | orchestrator | Sunday 08 March 2026 00:57:14 +0000 (0:00:00.130) 0:00:10.035 ********** 2026-03-08 00:58:53.735281 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735286 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735290 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735294 | orchestrator | 2026-03-08 00:58:53.735299 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.735303 | orchestrator | Sunday 08 March 2026 00:57:14 +0000 (0:00:00.310) 0:00:10.345 ********** 2026-03-08 00:58:53.735307 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.735312 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.735317 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.735321 | orchestrator | 2026-03-08 00:58:53.735325 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.735330 | orchestrator | Sunday 08 March 2026 00:57:14 +0000 (0:00:00.340) 0:00:10.686 ********** 2026-03-08 00:58:53.735334 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735339 | orchestrator | 2026-03-08 00:58:53.735343 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735347 | orchestrator | Sunday 08 March 2026 00:57:15 +0000 (0:00:00.141) 0:00:10.827 ********** 2026-03-08 00:58:53.735352 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735356 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735361 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735364 | orchestrator | 2026-03-08 00:58:53.735368 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.735384 | orchestrator | Sunday 08 March 2026 00:57:15 +0000 (0:00:00.548) 0:00:11.376 ********** 2026-03-08 00:58:53.735390 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.735396 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.735402 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.735409 | orchestrator | 2026-03-08 00:58:53.735420 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.735427 | orchestrator | Sunday 08 March 2026 00:57:15 +0000 (0:00:00.328) 0:00:11.705 
********** 2026-03-08 00:58:53.735434 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735440 | orchestrator | 2026-03-08 00:58:53.735447 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735452 | orchestrator | Sunday 08 March 2026 00:57:16 +0000 (0:00:00.156) 0:00:11.862 ********** 2026-03-08 00:58:53.735459 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735464 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735468 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735472 | orchestrator | 2026-03-08 00:58:53.735501 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-08 00:58:53.735507 | orchestrator | Sunday 08 March 2026 00:57:16 +0000 (0:00:00.319) 0:00:12.181 ********** 2026-03-08 00:58:53.735514 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:58:53.735518 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:58:53.735522 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:58:53.735526 | orchestrator | 2026-03-08 00:58:53.735530 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-08 00:58:53.735534 | orchestrator | Sunday 08 March 2026 00:57:16 +0000 (0:00:00.313) 0:00:12.494 ********** 2026-03-08 00:58:53.735537 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735541 | orchestrator | 2026-03-08 00:58:53.735545 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-08 00:58:53.735549 | orchestrator | Sunday 08 March 2026 00:57:16 +0000 (0:00:00.116) 0:00:12.611 ********** 2026-03-08 00:58:53.735552 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735556 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735560 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735564 | orchestrator | 2026-03-08 00:58:53.735568 | 
orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-08 00:58:53.735573 | orchestrator | Sunday 08 March 2026 00:57:17 +0000 (0:00:00.497) 0:00:13.109 ********** 2026-03-08 00:58:53.735576 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:58:53.735581 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:58:53.735586 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:58:53.735593 | orchestrator | 2026-03-08 00:58:53.735599 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-08 00:58:53.735605 | orchestrator | Sunday 08 March 2026 00:57:18 +0000 (0:00:01.582) 0:00:14.691 ********** 2026-03-08 00:58:53.735611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-08 00:58:53.735620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-08 00:58:53.735631 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-08 00:58:53.735636 | orchestrator | 2026-03-08 00:58:53.735643 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-08 00:58:53.735648 | orchestrator | Sunday 08 March 2026 00:57:20 +0000 (0:00:01.977) 0:00:16.669 ********** 2026-03-08 00:58:53.735654 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-08 00:58:53.735661 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-08 00:58:53.735667 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-08 00:58:53.735673 | orchestrator | 2026-03-08 00:58:53.735687 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-08 00:58:53.735700 | 
orchestrator | Sunday 08 March 2026 00:57:23 +0000 (0:00:02.412) 0:00:19.081 ********** 2026-03-08 00:58:53.735706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-08 00:58:53.735712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-08 00:58:53.735719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-08 00:58:53.735725 | orchestrator | 2026-03-08 00:58:53.735731 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-08 00:58:53.735737 | orchestrator | Sunday 08 March 2026 00:57:25 +0000 (0:00:02.099) 0:00:21.181 ********** 2026-03-08 00:58:53.735743 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735750 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735756 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735762 | orchestrator | 2026-03-08 00:58:53.735769 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-08 00:58:53.735775 | orchestrator | Sunday 08 March 2026 00:57:25 +0000 (0:00:00.309) 0:00:21.490 ********** 2026-03-08 00:58:53.735781 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735785 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735789 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:53.735792 | orchestrator | 2026-03-08 00:58:53.735796 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-08 00:58:53.735800 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.283) 0:00:21.774 ********** 2026-03-08 00:58:53.735804 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:58:53.735808 | orchestrator | 2026-03-08 
00:58:53.735812 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-08 00:58:53.735816 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.798) 0:00:22.572 ********** 2026-03-08 00:58:53.735829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:53.735846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:53.735855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:53.735864 | orchestrator | 2026-03-08 00:58:53.735868 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-08 00:58:53.735872 | orchestrator | Sunday 08 March 2026 00:57:28 +0000 (0:00:01.637) 0:00:24.210 ********** 2026-03-08 00:58:53.735886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:53.735895 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:53.735910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:53.735924 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:53.735935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})
2026-03-08 00:58:53.735941 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:53.735948 | orchestrator |
2026-03-08 00:58:53.735954 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-03-08 00:58:53.735960 | orchestrator | Sunday 08 March 2026 00:57:29 +0000 (0:00:00.669) 0:00:24.879 **********
2026-03-08 00:58:53.736102 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:53.736122 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:53.736136 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:53.736139 | orchestrator |
2026-03-08 00:58:53.736143 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-03-08 00:58:53.736151 | orchestrator | Sunday 08 March 2026 00:57:30 +0000 (0:00:00.854) 0:00:25.733 **********
2026-03-08 00:58:53.736162 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:58:53.736184 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:58:53.736201 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:58:53.736219 | orchestrator |
2026-03-08 00:58:53.736227 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-08 00:58:53.736233 | orchestrator | Sunday 08 March 2026 00:57:31 +0000 (0:00:01.481) 0:00:27.215 **********
2026-03-08 00:58:53.736240 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:53.736246 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:53.736252 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:53.736259 | orchestrator |
2026-03-08 00:58:53.736266 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-08 00:58:53.736278 | orchestrator | Sunday 08 March 2026 00:57:31 +0000 (0:00:00.288) 0:00:27.503 **********
2026-03-08 00:58:53.736286 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:58:53.736293 | orchestrator |
2026-03-08 00:58:53.736300 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-08 00:58:53.736307 | orchestrator | Sunday 08 March 2026 00:57:32 +0000 (0:00:00.584) 0:00:28.088 **********
2026-03-08 00:58:53.736314 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:58:53.736321 | orchestrator |
2026-03-08 00:58:53.736325 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-08 00:58:53.736329 | orchestrator | Sunday 08 March 2026 00:57:34 +0000 (0:00:02.593) 0:00:30.681 **********
2026-03-08 00:58:53.736333 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:58:53.736337 | orchestrator |
2026-03-08 00:58:53.736340 | orchestrator | TASK [horizon : Running Horizon bootstrap container]
***************************
2026-03-08 00:58:53.736344 | orchestrator | Sunday 08 March 2026 00:57:37 +0000 (0:00:02.814) 0:00:33.496 **********
2026-03-08 00:58:53.736348 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:58:53.736352 | orchestrator |
2026-03-08 00:58:53.736356 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-08 00:58:53.736360 | orchestrator | Sunday 08 March 2026 00:57:54 +0000 (0:00:16.421) 0:00:49.917 **********
2026-03-08 00:58:53.736363 | orchestrator |
2026-03-08 00:58:53.736367 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-08 00:58:53.736371 | orchestrator | Sunday 08 March 2026 00:57:54 +0000 (0:00:00.073) 0:00:49.990 **********
2026-03-08 00:58:53.736376 | orchestrator |
2026-03-08 00:58:53.736381 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-08 00:58:53.736385 | orchestrator | Sunday 08 March 2026 00:57:54 +0000 (0:00:00.074) 0:00:50.059 **********
2026-03-08 00:58:53.736389 | orchestrator |
2026-03-08 00:58:53.736393 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-08 00:58:53.736397 | orchestrator | Sunday 08 March 2026 00:57:54 +0000 (0:00:00.074) 0:00:50.134 **********
2026-03-08 00:58:53.736400 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:58:53.736405 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:58:53.736409 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:58:53.736412 | orchestrator |
2026-03-08 00:58:53.736416 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:58:53.736428 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-08 00:58:53.736435 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-08 00:58:53.736439 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-08 00:58:53.736443 | orchestrator |
2026-03-08 00:58:53.736446 | orchestrator |
2026-03-08 00:58:53.736450 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:58:53.736455 | orchestrator | Sunday 08 March 2026 00:58:52 +0000 (0:00:57.685) 0:01:47.819 **********
2026-03-08 00:58:53.736458 | orchestrator | ===============================================================================
2026-03-08 00:58:53.736462 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.69s
2026-03-08 00:58:53.736466 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.42s
2026-03-08 00:58:53.736470 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.81s
2026-03-08 00:58:53.736493 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.59s
2026-03-08 00:58:53.736500 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.41s
2026-03-08 00:58:53.736507 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.10s
2026-03-08 00:58:53.736513 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.98s
2026-03-08 00:58:53.736519 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.64s
2026-03-08 00:58:53.736525 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s
2026-03-08 00:58:53.736532 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.48s
2026-03-08 00:58:53.736538 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.97s
2026-03-08 00:58:53.736544 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s
2026-03-08 00:58:53.736550 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s
2026-03-08 00:58:53.736555 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2026-03-08 00:58:53.736562 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s
2026-03-08 00:58:53.736567 | orchestrator | horizon : Update policy file name --------------------------------------- 0.66s
2026-03-08 00:58:53.736574 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-03-08 00:58:53.736580 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s
2026-03-08 00:58:53.736589 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2026-03-08 00:58:53.736599 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.53s
2026-03-08 00:58:53.736605 | orchestrator | 2026-03-08 00:58:53 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED
2026-03-08 00:58:53.736616 | orchestrator | 2026-03-08 00:58:53 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:58:53.736624 | orchestrator | 2026-03-08 00:58:53 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:56.780284 | orchestrator | 2026-03-08 00:58:56 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED
2026-03-08 00:58:56.781769 | orchestrator | 2026-03-08 00:58:56 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:58:56.781807 | orchestrator | 2026-03-08 00:58:56 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:59.819750 | orchestrator | 2026-03-08 00:58:59 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED
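The horizon container deployed above carries a healthcheck of the form `{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}`. `healthcheck_curl` is a helper script shipped inside kolla images; its internals are not shown in this log. As a rough sketch of what such a check amounts to (an HTTP GET that must return a status below 400, retried a fixed number of times), under that assumption:

```python
import time
import urllib.error
import urllib.request


def http_healthcheck(url, retries=3, interval=0.0, timeout=30.0):
    """Approximation of a container HTTP healthcheck: the service counts
    as healthy once a GET to `url` returns a status below 400.  Mirrors
    the shape of the horizon healthcheck above (retries=3, timeout=30);
    the real `healthcheck_curl` kolla helper may differ in detail."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    return True
        except (urllib.error.URLError, OSError):
            # connection refused, timeout, or HTTP >= 400 -- retry
            pass
        if attempt < retries:
            time.sleep(interval)
    return False
```

The `start_period` field in the container definition has no analogue here: it only tells the container runtime to ignore failing probes during the first seconds after start.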
2026-03-08 00:58:59.822864 | orchestrator | 2026-03-08 00:58:59 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:58:59.823026 | orchestrator | 2026-03-08 00:58:59 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:02.878285 | orchestrator | 2026-03-08 00:59:02 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state STARTED
2026-03-08 00:59:02.879808 | orchestrator | 2026-03-08 00:59:02 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:02.879876 | orchestrator | 2026-03-08 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:05.940047 | orchestrator | 2026-03-08 00:59:05 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:05.940309 | orchestrator | 2026-03-08 00:59:05 | INFO  | Task 162b9bd7-6595-46d0-a71d-d40b913440f9 is in state SUCCESS
2026-03-08 00:59:05.942082 | orchestrator | 2026-03-08 00:59:05 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:05.942118 | orchestrator | 2026-03-08 00:59:05 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:08.991086 | orchestrator | 2026-03-08 00:59:08 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:08.991361 | orchestrator | 2026-03-08 00:59:08 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:08.991389 | orchestrator | 2026-03-08 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:12.043644 | orchestrator | 2026-03-08 00:59:12 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:12.045783 | orchestrator | 2026-03-08 00:59:12 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:12.045964 | orchestrator | 2026-03-08 00:59:12 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:15.088670 | orchestrator | 2026-03-08 00:59:15 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:15.089291 | orchestrator | 2026-03-08 00:59:15 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:15.089345 | orchestrator | 2026-03-08 00:59:15 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:18.145119 | orchestrator | 2026-03-08 00:59:18 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:18.147628 | orchestrator | 2026-03-08 00:59:18 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:18.147695 | orchestrator | 2026-03-08 00:59:18 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:21.183399 | orchestrator | 2026-03-08 00:59:21 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:21.183556 | orchestrator | 2026-03-08 00:59:21 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:21.184063 | orchestrator | 2026-03-08 00:59:21 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:24.226599 | orchestrator | 2026-03-08 00:59:24 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:24.229908 | orchestrator | 2026-03-08 00:59:24 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:24.229974 | orchestrator | 2026-03-08 00:59:24 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:27.275498 | orchestrator | 2026-03-08 00:59:27 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:27.277919 | orchestrator | 2026-03-08 00:59:27 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:27.278010 | orchestrator | 2026-03-08 00:59:27 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:30.314502 | orchestrator | 2026-03-08 00:59:30 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:30.315797 | orchestrator | 2026-03-08 00:59:30 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:30.316743 | orchestrator | 2026-03-08 00:59:30 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:33.366931 | orchestrator | 2026-03-08 00:59:33 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:33.367895 | orchestrator | 2026-03-08 00:59:33 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:33.367944 | orchestrator | 2026-03-08 00:59:33 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:36.416523 | orchestrator | 2026-03-08 00:59:36 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:36.418254 | orchestrator | 2026-03-08 00:59:36 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:36.418312 | orchestrator | 2026-03-08 00:59:36 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:39.461312 | orchestrator | 2026-03-08 00:59:39 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:39.463292 | orchestrator | 2026-03-08 00:59:39 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:39.463342 | orchestrator | 2026-03-08 00:59:39 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:42.501391 | orchestrator | 2026-03-08 00:59:42 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:42.502267 | orchestrator | 2026-03-08 00:59:42 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:42.502356 | orchestrator | 2026-03-08 00:59:42 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:45.542281 | orchestrator | 2026-03-08 00:59:45 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:45.544525 | orchestrator | 2026-03-08 00:59:45 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:45.544597 | orchestrator | 2026-03-08 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:48.591496 | orchestrator | 2026-03-08 00:59:48 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:48.594244 | orchestrator | 2026-03-08 00:59:48 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:48.594328 | orchestrator | 2026-03-08 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:51.639425 | orchestrator | 2026-03-08 00:59:51 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:51.641135 | orchestrator | 2026-03-08 00:59:51 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state STARTED
2026-03-08 00:59:51.641177 | orchestrator | 2026-03-08 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:59:54.683429 | orchestrator | 2026-03-08 00:59:54 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED
2026-03-08 00:59:54.684776 | orchestrator | 2026-03-08 00:59:54 | INFO  | Task 15c17902-7a73-4506-9b08-089575a55111 is in state SUCCESS
2026-03-08 00:59:54.684857 | orchestrator |
2026-03-08 00:59:54.684866 | orchestrator |
2026-03-08 00:59:54.684872 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-08 00:59:54.684903 | orchestrator |
2026-03-08 00:59:54.684909 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-08 00:59:54.684916 | orchestrator | Sunday 08 March 2026 00:58:29 +0000 (0:00:00.165) 0:00:00.165 **********
2026-03-08 00:59:54.684964 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-08 00:59:54.685001 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685008 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685015 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-08 00:59:54.685021 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685028 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-08 00:59:54.685051 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-08 00:59:54.685058 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-08 00:59:54.685065 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-08 00:59:54.685071 | orchestrator |
2026-03-08 00:59:54.685077 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-08 00:59:54.685084 | orchestrator | Sunday 08 March 2026 00:58:33 +0000 (0:00:04.510) 0:00:04.675 **********
2026-03-08 00:59:54.685090 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-08 00:59:54.685096 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685103 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685110 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-08 00:59:54.685116 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685123 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-08 00:59:54.685129 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-08 00:59:54.685135 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-08 00:59:54.685141 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-08 00:59:54.685148 | orchestrator |
2026-03-08 00:59:54.685154 | orchestrator | TASK [Create share directory] **************************************************
2026-03-08 00:59:54.685160 | orchestrator | Sunday 08 March 2026 00:58:37 +0000 (0:00:04.180) 0:00:08.855 **********
2026-03-08 00:59:54.685225 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:59:54.685232 | orchestrator |
2026-03-08 00:59:54.685238 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-08 00:59:54.685244 | orchestrator | Sunday 08 March 2026 00:58:38 +0000 (0:00:01.043) 0:00:09.899 **********
2026-03-08 00:59:54.685249 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-08 00:59:54.685256 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685262 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685268 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-08 00:59:54.685288 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-08 00:59:54.685295 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-08 00:59:54.685310 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-08 00:59:54.685317
| orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-08 00:59:54.685323 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-08 00:59:54.685329 | orchestrator | 2026-03-08 00:59:54.685335 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-08 00:59:54.685341 | orchestrator | Sunday 08 March 2026 00:58:52 +0000 (0:00:13.734) 0:00:23.633 ********** 2026-03-08 00:59:54.685426 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-08 00:59:54.685433 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-08 00:59:54.685440 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-08 00:59:54.685446 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-08 00:59:54.685463 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-08 00:59:54.685470 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-08 00:59:54.685476 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-08 00:59:54.685482 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-08 00:59:54.685488 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-08 00:59:54.685494 | orchestrator | 2026-03-08 00:59:54.685500 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-08 00:59:54.685506 | orchestrator | Sunday 08 March 2026 
00:58:55 +0000 (0:00:03.220) 0:00:26.854 ********** 2026-03-08 00:59:54.685513 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-08 00:59:54.685519 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:54.685526 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:54.685532 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-08 00:59:54.685538 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:54.685544 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-08 00:59:54.685550 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-08 00:59:54.685556 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-08 00:59:54.685562 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-08 00:59:54.685569 | orchestrator | 2026-03-08 00:59:54.685575 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:59:54.685581 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:59:54.685680 | orchestrator | 2026-03-08 00:59:54.685690 | orchestrator | 2026-03-08 00:59:54.685697 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:59:54.685703 | orchestrator | Sunday 08 March 2026 00:59:02 +0000 (0:00:07.161) 0:00:34.015 ********** 2026-03-08 00:59:54.685710 | orchestrator | =============================================================================== 2026-03-08 00:59:54.685717 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.73s 2026-03-08 00:59:54.685723 | orchestrator | Write ceph keys to the configuration directory 
-------------------------- 7.16s 2026-03-08 00:59:54.685730 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.51s 2026-03-08 00:59:54.685745 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.18s 2026-03-08 00:59:54.685752 | orchestrator | Check if target directories exist --------------------------------------- 3.22s 2026-03-08 00:59:54.685759 | orchestrator | Create share directory -------------------------------------------------- 1.04s 2026-03-08 00:59:54.685765 | orchestrator | 2026-03-08 00:59:54.686167 | orchestrator | 2026-03-08 00:59:54.686192 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:59:54.686199 | orchestrator | 2026-03-08 00:59:54.686206 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:59:54.686212 | orchestrator | Sunday 08 March 2026 00:57:04 +0000 (0:00:00.260) 0:00:00.260 ********** 2026-03-08 00:59:54.686219 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:54.686226 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:54.686234 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:54.686466 | orchestrator | 2026-03-08 00:59:54.686474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:59:54.686517 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.315) 0:00:00.576 ********** 2026-03-08 00:59:54.686525 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-08 00:59:54.686532 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-08 00:59:54.686538 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-08 00:59:54.686544 | orchestrator | 2026-03-08 00:59:54.686559 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-08 00:59:54.686565 | 
orchestrator | 2026-03-08 00:59:54.686571 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:54.686577 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.468) 0:00:01.045 ********** 2026-03-08 00:59:54.686584 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:59:54.686590 | orchestrator | 2026-03-08 00:59:54.686596 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-08 00:59:54.686602 | orchestrator | Sunday 08 March 2026 00:57:06 +0000 (0:00:00.578) 0:00:01.623 ********** 2026-03-08 00:59:54.686612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.686621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.686701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.686716 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.686725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.686731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.686738 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.686744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.686757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.686763 | orchestrator | 2026-03-08 00:59:54.686769 | orchestrator | TASK 
[keystone : Check if policies shall be overwritten] *********************** 2026-03-08 00:59:54.686779 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:01.761) 0:00:03.384 ********** 2026-03-08 00:59:54.686786 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.686792 | orchestrator | 2026-03-08 00:59:54.686798 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-08 00:59:54.686817 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:00.132) 0:00:03.517 ********** 2026-03-08 00:59:54.686823 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.686836 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:54.686843 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:54.686849 | orchestrator | 2026-03-08 00:59:54.686854 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-08 00:59:54.686861 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.458) 0:00:03.976 ********** 2026-03-08 00:59:54.686868 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:59:54.686874 | orchestrator | 2026-03-08 00:59:54.686880 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:54.686886 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.841) 0:00:04.818 ********** 2026-03-08 00:59:54.686892 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:59:54.686898 | orchestrator | 2026-03-08 00:59:54.686904 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-08 00:59:54.686910 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.516) 0:00:05.334 ********** 2026-03-08 00:59:54.686917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.687001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2026-03-08 00:59:54.687028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.687035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.687045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.687052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.687058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.687069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.687076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.687082 | orchestrator | 2026-03-08 00:59:54.687089 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-08 00:59:54.687096 | orchestrator | Sunday 08 March 2026 00:57:13 +0000 (0:00:03.476) 0:00:08.811 ********** 2026-03-08 00:59:54.687111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:54.687119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:54.687126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:54.687136 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.687143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:54.687150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:54.687162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687169 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687205 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687211 | orchestrator |
2026-03-08 00:59:54.687217 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-08 00:59:54.687225 | orchestrator | Sunday 08 March 2026 00:57:13 +0000 (0:00:00.547) 0:00:09.359 **********
2026-03-08 00:59:54.687232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687260 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.687267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687291 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687337 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687408 | orchestrator |
2026-03-08 00:59:54.687416 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-03-08 00:59:54.687423 | orchestrator | Sunday 08 March 2026 00:57:14 +0000 (0:00:00.864) 0:00:10.224 **********
2026-03-08 00:59:54.687429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687511 | orchestrator |
2026-03-08 00:59:54.687517 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-03-08 00:59:54.687523 | orchestrator | Sunday 08 March 2026 00:57:18 +0000 (0:00:03.497) 0:00:13.722 **********
2026-03-08 00:59:54.687538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687610 | orchestrator |
2026-03-08 00:59:54.687616 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-08 00:59:54.687623 | orchestrator | Sunday 08 March 2026 00:57:23 +0000 (0:00:05.734) 0:00:19.457 **********
2026-03-08 00:59:54.687629 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:59:54.687635 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:59:54.687641 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:59:54.687647 | orchestrator |
2026-03-08 00:59:54.687653 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-03-08 00:59:54.687659 | orchestrator | Sunday 08 March 2026 00:57:25 +0000 (0:00:01.536) 0:00:20.993 **********
2026-03-08 00:59:54.687665 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.687671 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687677 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687683 | orchestrator |
2026-03-08 00:59:54.687688 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-03-08 00:59:54.687701 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.354) 0:00:21.551 **********
2026-03-08 00:59:54.687707 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.687714 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687719 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687725 | orchestrator |
2026-03-08 00:59:54.687731 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-03-08 00:59:54.687737 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.514) 0:00:21.905 **********
2026-03-08 00:59:54.687744 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.687750 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687756 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687763 | orchestrator |
2026-03-08 00:59:54.687769 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-03-08 00:59:54.687775 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.557) 0:00:22.420 **********
2026-03-08 00:59:54.687785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687805 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.687812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687843 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:59:54.687857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:59:54.687863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:59:54.687876 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687883 | orchestrator |
2026-03-08 00:59:54.687890 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-08 00:59:54.687897 | orchestrator | Sunday 08 March 2026 00:57:27 +0000 (0:00:00.707) 0:00:23.127 **********
2026-03-08 00:59:54.687903 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.687909 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.687914 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.687920 | orchestrator |
2026-03-08 00:59:54.687926 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-03-08 00:59:54.687932 | orchestrator | Sunday 08 March 2026 00:57:27 +0000 (0:00:00.291) 0:00:23.419 **********
2026-03-08 00:59:54.687938 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-08 00:59:54.687948 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-08 00:59:54.687954 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-08 00:59:54.687960 | orchestrator |
2026-03-08 00:59:54.687966 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-03-08 00:59:54.687973 | orchestrator | Sunday 08 March 2026 00:57:29 +0000 (0:00:01.595) 0:00:25.014 **********
2026-03-08 00:59:54.687979 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:59:54.687985 | orchestrator |
2026-03-08 00:59:54.687991 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-03-08 00:59:54.687997 | orchestrator | Sunday 08 March 2026 00:57:30 +0000 (0:00:01.172) 0:00:26.187 **********
2026-03-08 00:59:54.688004 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:59:54.688010 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:59:54.688015 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:59:54.688021 | orchestrator |
2026-03-08 00:59:54.688027 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-03-08 00:59:54.688036 | orchestrator | Sunday 08 March 2026 00:57:31 +0000 (0:00:00.826) 0:00:27.013 **********
2026-03-08 00:59:54.688042 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:59:54.688048 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-08 00:59:54.688054 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-08 00:59:54.688059 | orchestrator |
2026-03-08 00:59:54.688065 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-03-08 00:59:54.688071 | orchestrator | Sunday 08 March 2026 00:57:32 +0000 (0:00:01.012) 0:00:28.026 **********
2026-03-08 00:59:54.688077 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:59:54.688083 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:59:54.688088 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:59:54.688094 | orchestrator |
2026-03-08 00:59:54.688099 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-03-08 00:59:54.688105 | orchestrator | Sunday 08 March 2026 00:57:32 +0000 (0:00:00.309) 0:00:28.335 **********
2026-03-08 00:59:54.688110 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-08 00:59:54.688116 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-08 00:59:54.688122 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-08 00:59:54.688127 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-08 00:59:54.688133 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-08 00:59:54.688139 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-08 00:59:54.688145 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-08 00:59:54.688157 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-08 00:59:54.688163 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-08 00:59:54.688168 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-08 00:59:54.688174 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-08
00:59:54.688180 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-08 00:59:54.688186 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-08 00:59:54.688192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-08 00:59:54.688197 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-08 00:59:54.688203 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 00:59:54.688209 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 00:59:54.688215 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 00:59:54.688220 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 00:59:54.688226 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 00:59:54.688232 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 00:59:54.688238 | orchestrator | 2026-03-08 00:59:54.688244 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-08 00:59:54.688249 | orchestrator | Sunday 08 March 2026 00:57:41 +0000 (0:00:08.717) 0:00:37.053 ********** 2026-03-08 00:59:54.688255 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 00:59:54.688261 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 00:59:54.688267 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 00:59:54.688273 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 00:59:54.688279 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 00:59:54.688289 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 00:59:54.688295 | orchestrator | 2026-03-08 00:59:54.688300 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-08 00:59:54.688306 | orchestrator | Sunday 08 March 2026 00:57:44 +0000 (0:00:02.930) 0:00:39.983 ********** 2026-03-08 00:59:54.688316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.688327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.688333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:54.688340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.688364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.688374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:54.688384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.688390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.688396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:54.688402 | orchestrator | 2026-03-08 00:59:54.688408 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-08 00:59:54.688414 | orchestrator | Sunday 08 March 2026 00:57:46 +0000 (0:00:02.343) 0:00:42.327 ********** 2026-03-08 00:59:54.688420 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.688426 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:54.688432 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:54.688438 | orchestrator | 2026-03-08 00:59:54.688444 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-08 00:59:54.688450 | orchestrator | Sunday 08 March 2026 00:57:47 +0000 (0:00:00.302) 0:00:42.629 ********** 2026-03-08 00:59:54.688457 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688462 | orchestrator | 2026-03-08 00:59:54.688468 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-08 00:59:54.688474 | orchestrator | Sunday 08 March 2026 00:57:49 +0000 (0:00:02.233) 0:00:44.863 ********** 2026-03-08 00:59:54.688481 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688487 | orchestrator | 2026-03-08 00:59:54.688493 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-08 00:59:54.688499 | orchestrator | Sunday 08 March 2026 00:57:51 +0000 (0:00:02.260) 0:00:47.123 ********** 2026-03-08 00:59:54.688506 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:54.688512 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:54.688517 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:54.688523 | orchestrator | 2026-03-08 00:59:54.688529 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-08 00:59:54.688540 | orchestrator | Sunday 08 March 2026 00:57:52 +0000 (0:00:00.986) 0:00:48.110 ********** 2026-03-08 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:54.688553 | orchestrator | ok: [testbed-node-0] 2026-03-08
00:59:54.688559 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:54.688566 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:54.688572 | orchestrator | 2026-03-08 00:59:54.688583 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-08 00:59:54.688590 | orchestrator | Sunday 08 March 2026 00:57:52 +0000 (0:00:00.332) 0:00:48.442 ********** 2026-03-08 00:59:54.688596 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.688602 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:54.688607 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:54.688614 | orchestrator | 2026-03-08 00:59:54.688620 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-08 00:59:54.688626 | orchestrator | Sunday 08 March 2026 00:57:53 +0000 (0:00:00.330) 0:00:48.772 ********** 2026-03-08 00:59:54.688632 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688638 | orchestrator | 2026-03-08 00:59:54.688647 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-08 00:59:54.688653 | orchestrator | Sunday 08 March 2026 00:58:08 +0000 (0:00:15.703) 0:01:04.475 ********** 2026-03-08 00:59:54.688659 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688666 | orchestrator | 2026-03-08 00:59:54.688672 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-08 00:59:54.688678 | orchestrator | Sunday 08 March 2026 00:58:19 +0000 (0:00:10.757) 0:01:15.232 ********** 2026-03-08 00:59:54.688685 | orchestrator | 2026-03-08 00:59:54.688690 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-08 00:59:54.688697 | orchestrator | Sunday 08 March 2026 00:58:19 +0000 (0:00:00.083) 0:01:15.316 ********** 2026-03-08 00:59:54.688703 | orchestrator | 2026-03-08 00:59:54.688709 | orchestrator | TASK 
[keystone : Flush handlers] *********************************************** 2026-03-08 00:59:54.688714 | orchestrator | Sunday 08 March 2026 00:58:19 +0000 (0:00:00.073) 0:01:15.389 ********** 2026-03-08 00:59:54.688720 | orchestrator | 2026-03-08 00:59:54.688726 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-08 00:59:54.688732 | orchestrator | Sunday 08 March 2026 00:58:19 +0000 (0:00:00.082) 0:01:15.472 ********** 2026-03-08 00:59:54.688738 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688744 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:54.688750 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:54.688756 | orchestrator | 2026-03-08 00:59:54.688761 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-08 00:59:54.688768 | orchestrator | Sunday 08 March 2026 00:58:47 +0000 (0:00:27.366) 0:01:42.839 ********** 2026-03-08 00:59:54.688774 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688780 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:54.688786 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:54.688792 | orchestrator | 2026-03-08 00:59:54.688798 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-08 00:59:54.688803 | orchestrator | Sunday 08 March 2026 00:58:57 +0000 (0:00:10.450) 0:01:53.289 ********** 2026-03-08 00:59:54.688809 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:54.688815 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:54.688821 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688826 | orchestrator | 2026-03-08 00:59:54.688832 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:54.688838 | orchestrator | Sunday 08 March 2026 00:59:05 +0000 (0:00:07.674) 0:02:00.963 ********** 2026-03-08 00:59:54.688845 | 
orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:59:54.688850 | orchestrator | 2026-03-08 00:59:54.688856 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-08 00:59:54.688862 | orchestrator | Sunday 08 March 2026 00:59:06 +0000 (0:00:00.740) 0:02:01.703 ********** 2026-03-08 00:59:54.688869 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:54.688875 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:54.688881 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:54.688887 | orchestrator | 2026-03-08 00:59:54.688893 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-08 00:59:54.688905 | orchestrator | Sunday 08 March 2026 00:59:06 +0000 (0:00:00.732) 0:02:02.436 ********** 2026-03-08 00:59:54.688910 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:54.688916 | orchestrator | 2026-03-08 00:59:54.688922 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-08 00:59:54.688929 | orchestrator | Sunday 08 March 2026 00:59:08 +0000 (0:00:01.879) 0:02:04.316 ********** 2026-03-08 00:59:54.688935 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-08 00:59:54.688941 | orchestrator | 2026-03-08 00:59:54.688947 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-08 00:59:54.688953 | orchestrator | Sunday 08 March 2026 00:59:20 +0000 (0:00:11.248) 0:02:15.564 ********** 2026-03-08 00:59:54.688959 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-08 00:59:54.688965 | orchestrator | 2026-03-08 00:59:54.688971 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-08 00:59:54.688978 | orchestrator | Sunday 08 March 2026 00:59:43 +0000 (0:00:23.765) 
0:02:39.329 ********** 2026-03-08 00:59:54.688984 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-08 00:59:54.688990 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-08 00:59:54.688997 | orchestrator | 2026-03-08 00:59:54.689003 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-08 00:59:54.689010 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:05.556) 0:02:44.885 ********** 2026-03-08 00:59:54.689016 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.689021 | orchestrator | 2026-03-08 00:59:54.689027 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-08 00:59:54.689037 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:00.119) 0:02:45.005 ********** 2026-03-08 00:59:54.689044 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.689050 | orchestrator | 2026-03-08 00:59:54.689056 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-08 00:59:54.689062 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:00.108) 0:02:45.114 ********** 2026-03-08 00:59:54.689068 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.689073 | orchestrator | 2026-03-08 00:59:54.689080 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-08 00:59:54.689086 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:00.118) 0:02:45.232 ********** 2026-03-08 00:59:54.689091 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.689098 | orchestrator | 2026-03-08 00:59:54.689104 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-08 00:59:54.689110 | orchestrator | Sunday 08 March 2026 00:59:50 +0000 (0:00:00.404) 0:02:45.636 
********** 2026-03-08 00:59:54.689116 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:54.689122 | orchestrator | 2026-03-08 00:59:54.689127 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:54.689137 | orchestrator | Sunday 08 March 2026 00:59:52 +0000 (0:00:02.871) 0:02:48.508 ********** 2026-03-08 00:59:54.689142 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:54.689149 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:54.689154 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:54.689161 | orchestrator | 2026-03-08 00:59:54.689167 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:59:54.689174 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-08 00:59:54.689181 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 00:59:54.689187 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 00:59:54.689198 | orchestrator | 2026-03-08 00:59:54.689204 | orchestrator | 2026-03-08 00:59:54.689209 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:59:54.689215 | orchestrator | Sunday 08 March 2026 00:59:53 +0000 (0:00:00.421) 0:02:48.929 ********** 2026-03-08 00:59:54.689221 | orchestrator | =============================================================================== 2026-03-08 00:59:54.689227 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 27.37s 2026-03-08 00:59:54.689233 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.77s 2026-03-08 00:59:54.689239 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.70s 2026-03-08 
00:59:54.689246 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.25s 2026-03-08 00:59:54.689252 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.76s 2026-03-08 00:59:54.689258 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.45s 2026-03-08 00:59:54.689264 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.72s 2026-03-08 00:59:54.689270 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.67s 2026-03-08 00:59:54.689276 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.73s 2026-03-08 00:59:54.689282 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.56s 2026-03-08 00:59:54.689287 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.50s 2026-03-08 00:59:54.689293 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.48s 2026-03-08 00:59:54.689300 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.93s 2026-03-08 00:59:54.689306 | orchestrator | keystone : Creating default user role ----------------------------------- 2.87s 2026-03-08 00:59:54.689311 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.34s 2026-03-08 00:59:54.689318 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.26s 2026-03-08 00:59:54.689323 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.23s 2026-03-08 00:59:54.689330 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.88s 2026-03-08 00:59:54.689336 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.76s 2026-03-08 00:59:54.689342 
| orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.60s 2026-03-08 00:59:57.713277 | orchestrator | 2026-03-08 00:59:57 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED 2026-03-08 00:59:57.713898 | orchestrator | 2026-03-08 00:59:57 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 00:59:57.716696 | orchestrator | 2026-03-08 00:59:57 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 00:59:57.717304 | orchestrator | 2026-03-08 00:59:57 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED 2026-03-08 00:59:57.718757 | orchestrator | 2026-03-08 00:59:57 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 00:59:57.718802 | orchestrator | 2026-03-08 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:00:00.746228 | orchestrator | 2026-03-08 01:00:00 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED 2026-03-08 01:00:00.746387 | orchestrator | 2026-03-08 01:00:00 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:00:00.746722 | orchestrator | 2026-03-08 01:00:00 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:00:00.750165 | orchestrator | 2026-03-08 01:00:00 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED 2026-03-08 01:00:00.751765 | orchestrator | 2026-03-08 01:00:00 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:00:00.752112 | orchestrator | 2026-03-08 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:00:03.809175 | orchestrator | 2026-03-08 01:00:03 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED 2026-03-08 01:00:03.809272 | orchestrator | 2026-03-08 01:00:03 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:00:03.809996 | orchestrator | 
2026-03-08 01:00:03 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:00:03.811992 | orchestrator | 2026-03-08 01:00:03 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state STARTED 2026-03-08 01:00:03.812874 | orchestrator | 2026-03-08 01:00:03 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:00:03.812920 | orchestrator | 2026-03-08 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:00:06.866851 | orchestrator | 2026-03-08 01:00:06 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED 2026-03-08 01:00:06.870183 | orchestrator | 2026-03-08 01:00:06 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:00:06.871513 | orchestrator | 2026-03-08 01:00:06 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:00:06.873097 | orchestrator | 2026-03-08 01:00:06 | INFO  | Task 7c6000f5-80a2-4565-b014-19807b4beae5 is in state STARTED 2026-03-08 01:00:06.875611 | orchestrator | 2026-03-08 01:00:06 | INFO  | Task 7a7e12b2-4cb4-40a3-b5f7-a99b131c4ce7 is in state SUCCESS 2026-03-08 01:00:06.876770 | orchestrator | 2026-03-08 01:00:06 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:00:06.876807 | orchestrator | 2026-03-08 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:00:09.930987 | orchestrator | 2026-03-08 01:00:09 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED 2026-03-08 01:00:09.932739 | orchestrator | 2026-03-08 01:00:09 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:00:09.935174 | orchestrator | 2026-03-08 01:00:09 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:00:09.937375 | orchestrator | 2026-03-08 01:00:09 | INFO  | Task 7c6000f5-80a2-4565-b014-19807b4beae5 is in state STARTED 2026-03-08 01:00:09.938779 | orchestrator | 
2026-03-08 01:00:09 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:00:09.938819 | orchestrator | 2026-03-08 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:00:12.992201 | orchestrator | 2026-03-08 01:00:12 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED
2026-03-08 01:00:12.992270 | orchestrator | 2026-03-08 01:00:12 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:00:12.992847 | orchestrator | 2026-03-08 01:00:12 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED
2026-03-08 01:00:12.994647 | orchestrator | 2026-03-08 01:00:12 | INFO  | Task 7c6000f5-80a2-4565-b014-19807b4beae5 is in state STARTED
2026-03-08 01:00:12.996260 | orchestrator | 2026-03-08 01:00:13 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:00:12.996346 | orchestrator | 2026-03-08 01:00:13 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the same five tasks repeated every ~3 seconds from 01:00:16 to 01:01:25 ...]
2026-03-08 01:01:25.916126 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED
2026-03-08 01:01:25.916303 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:01:25.916813 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED
2026-03-08 01:01:25.917478 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task 7c6000f5-80a2-4565-b014-19807b4beae5 is in state SUCCESS
2026-03-08 01:01:25.917835 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-08 01:01:25.917840 | orchestrator |
2026-03-08 01:01:25.917844 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-08 01:01:25.917849 | orchestrator | Sunday 08 March 2026 00:59:07 +0000 (0:00:00.240) 0:00:00.240 **********
2026-03-08 01:01:25.917854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-08 01:01:25.917860 | orchestrator |
2026-03-08 01:01:25.917864 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-08 01:01:25.917868 | orchestrator | Sunday 08 March 2026 00:59:07 +0000 (0:00:00.268) 0:00:00.509 **********
2026-03-08 01:01:25.917873 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-08 01:01:25.917877 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-08 01:01:25.917882 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-08 01:01:25.917887 | orchestrator |
2026-03-08 01:01:25.917891 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-08 01:01:25.917896 | orchestrator | Sunday 08 March 2026 00:59:09 +0000 (0:00:01.314) 0:00:01.823 **********
2026-03-08 01:01:25.917900 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-08 01:01:25.917904 | orchestrator |
2026-03-08 01:01:25.917908 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-08 01:01:25.917922 | orchestrator | Sunday 08 March 2026 00:59:10 +0000 (0:00:01.489) 0:00:03.313 **********
2026-03-08 01:01:25.917927 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.917931 | orchestrator |
2026-03-08 01:01:25.917935 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-08 01:01:25.917940 | orchestrator | Sunday 08 March 2026 00:59:11 +0000 (0:00:00.997) 0:00:04.310 **********
2026-03-08 01:01:25.917944 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.917948 | orchestrator |
2026-03-08 01:01:25.917952 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-08 01:01:25.917956 | orchestrator | Sunday 08 March 2026 00:59:12 +0000 (0:00:01.039) 0:00:05.349 **********
2026-03-08 01:01:25.917960 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-08 01:01:25.917964 | orchestrator | ok: [testbed-manager]
2026-03-08 01:01:25.917969 | orchestrator |
2026-03-08 01:01:25.917973 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-08 01:01:25.917977 | orchestrator | Sunday 08 March 2026 00:59:55 +0000 (0:00:42.967) 0:00:48.317 **********
2026-03-08 01:01:25.917981 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-08 01:01:25.917986 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-08 01:01:25.917991 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-08 01:01:25.917995 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-08 01:01:25.917999 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-08 01:01:25.918003 | orchestrator |
2026-03-08 01:01:25.918010 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-08 01:01:25.918103 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:03.986) 0:00:52.303 **********
2026-03-08 01:01:25.918171 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-08 01:01:25.918178 | orchestrator |
2026-03-08 01:01:25.918185 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-08 01:01:25.918192 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:00.406) 0:00:52.709 **********
2026-03-08 01:01:25.918199 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:01:25.918207 | orchestrator |
2026-03-08 01:01:25.918214 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-08 01:01:25.918219 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:00.094) 0:00:52.804 **********
2026-03-08 01:01:25.918230 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:01:25.918234 | orchestrator |
2026-03-08 01:01:25.918238 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-08 01:01:25.918243 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:00.432) 0:00:53.237 **********
2026-03-08 01:01:25.918247 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918251 | orchestrator |
2026-03-08 01:01:25.918255 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-08 01:01:25.918259 | orchestrator | Sunday 08 March 2026 01:00:01 +0000 (0:00:01.311) 0:00:54.548 **********
2026-03-08 01:01:25.918263 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918267 | orchestrator |
2026-03-08 01:01:25.918272 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-08 01:01:25.918276 | orchestrator | Sunday 08 March 2026 01:00:02 +0000 (0:00:00.724) 0:00:55.273 **********
2026-03-08 01:01:25.918280 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918284 | orchestrator |
2026-03-08 01:01:25.918288 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-08 01:01:25.918292 | orchestrator | Sunday 08 March 2026 01:00:03 +0000 (0:00:00.607) 0:00:55.880 **********
2026-03-08 01:01:25.918296 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-08 01:01:25.918301 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-08 01:01:25.918305 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-08 01:01:25.918309 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-08 01:01:25.918313 | orchestrator |
2026-03-08 01:01:25.918317 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:01:25.918321 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 01:01:25.918327 | orchestrator |
2026-03-08 01:01:25.918331 | orchestrator |
2026-03-08 01:01:25.918343 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:01:25.918347 | orchestrator | Sunday 08 March 2026 01:00:04 +0000 (0:00:01.680) 0:00:57.560 **********
2026-03-08 01:01:25.918351 | orchestrator | ===============================================================================
2026-03-08 01:01:25.918356 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.97s
2026-03-08 01:01:25.918360 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.99s
2026-03-08 01:01:25.918364 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.68s
2026-03-08 01:01:25.918368 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.49s
2026-03-08 01:01:25.918372 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.31s
2026-03-08 01:01:25.918376 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.31s
2026-03-08 01:01:25.918380 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.04s
2026-03-08 01:01:25.918384 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s
2026-03-08 01:01:25.918388 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s
2026-03-08 01:01:25.918392 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2026-03-08 01:01:25.918397 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.43s
2026-03-08 01:01:25.918401 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.41s
2026-03-08 01:01:25.918410 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s
2026-03-08 01:01:25.918414 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s
2026-03-08 01:01:25.918418 | orchestrator |
2026-03-08 01:01:25.918422 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 01:01:25.918426 | orchestrator | 2.16.14
2026-03-08 01:01:25.918431 | orchestrator |
2026-03-08 01:01:25.918439 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-08 01:01:25.918443 | orchestrator |
2026-03-08 01:01:25.918447 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-08 01:01:25.918451 | orchestrator | Sunday 08 March 2026 01:00:09 +0000 (0:00:00.269) 0:00:00.269 **********
2026-03-08 01:01:25.918456 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918460 | orchestrator |
2026-03-08 01:01:25.918464 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-08 01:01:25.918468 | orchestrator | Sunday 08 March 2026 01:00:11 +0000 (0:00:02.314) 0:00:02.583 **********
2026-03-08 01:01:25.918472 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918476 | orchestrator |
2026-03-08 01:01:25.918482 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-08 01:01:25.918488 | orchestrator | Sunday 08 March 2026 01:00:13 +0000 (0:00:01.189) 0:00:03.773 **********
2026-03-08 01:01:25.918495 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918500 | orchestrator |
2026-03-08 01:01:25.918504 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-08 01:01:25.918508 | orchestrator | Sunday 08 March 2026 01:00:14 +0000 (0:00:01.093) 0:00:04.866 **********
2026-03-08 01:01:25.918512 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918517 | orchestrator |
2026-03-08 01:01:25.918521 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-08 01:01:25.918525 | orchestrator | Sunday 08 March 2026 01:00:15 +0000 (0:00:01.240) 0:00:06.106 **********
2026-03-08 01:01:25.918529 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918533 | orchestrator |
2026-03-08 01:01:25.918537 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-08 01:01:25.918541 | orchestrator | Sunday 08 March 2026 01:00:16 +0000 (0:00:01.122) 0:00:07.228 **********
2026-03-08 01:01:25.918545 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918549 | orchestrator |
2026-03-08 01:01:25.918553 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-08 01:01:25.918558 | orchestrator | Sunday 08 March 2026 01:00:17 +0000 (0:00:01.152) 0:00:08.381 **********
2026-03-08 01:01:25.918562 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918566 | orchestrator |
2026-03-08 01:01:25.918570 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-08 01:01:25.918574 | orchestrator | Sunday 08 March 2026 01:00:19 +0000 (0:00:02.045) 0:00:10.426 **********
2026-03-08 01:01:25.918578 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918582 | orchestrator |
2026-03-08 01:01:25.918586 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-08 01:01:25.918590 | orchestrator | Sunday 08 March 2026 01:00:21 +0000 (0:00:01.238) 0:00:11.665 **********
2026-03-08 01:01:25.918594 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:25.918598 | orchestrator |
2026-03-08 01:01:25.918602 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-08 01:01:25.918606 | orchestrator | Sunday 08 March 2026 01:01:01 +0000 (0:00:40.071) 0:00:51.736 **********
2026-03-08 01:01:25.918610 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:01:25.918615 | orchestrator |
2026-03-08 01:01:25.918619 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-08 01:01:25.918623 | orchestrator |
2026-03-08 01:01:25.918627 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-08 01:01:25.918631 | orchestrator | Sunday 08 March 2026 01:01:01 +0000 (0:00:00.133) 0:00:51.870 **********
2026-03-08 01:01:25.918635 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:01:25.918639 | orchestrator |
2026-03-08 01:01:25.918643 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-08 01:01:25.918647 | orchestrator |
2026-03-08 01:01:25.918651 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-08 01:01:25.918656 | orchestrator | Sunday 08 March 2026 01:01:12 +0000 (0:00:11.499) 0:01:03.369 **********
2026-03-08 01:01:25.918664 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:01:25.918668 | orchestrator |
2026-03-08 01:01:25.918675 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-08 01:01:25.918680 | orchestrator |
2026-03-08 01:01:25.918684 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-08 01:01:25.918688 | orchestrator | Sunday 08 March 2026 01:01:24 +0000 (0:00:11.467) 0:01:14.837 **********
2026-03-08 01:01:25.918692 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:01:25.918696 | orchestrator |
2026-03-08 01:01:25.918700 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:01:25.918704 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-08 01:01:25.918709 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:01:25.918713 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:01:25.918718 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:01:25.918722 | orchestrator |
2026-03-08 01:01:25.918726 | orchestrator |
2026-03-08 01:01:25.918730 | orchestrator |
2026-03-08 01:01:25.918734 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:01:25.918738 | orchestrator | Sunday 08 March 2026 01:01:25 +0000 (0:00:01.088) 0:01:15.926 **********
2026-03-08 01:01:25.918742 | orchestrator | ===============================================================================
2026-03-08 01:01:25.918750 | orchestrator | Create admin user ------------------------------------------------------ 40.07s
2026-03-08 01:01:25.918754 | orchestrator | Restart ceph manager service ------------------------------------------- 24.06s
2026-03-08 01:01:25.918758 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.31s
2026-03-08 01:01:25.918762 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.05s
2026-03-08 01:01:25.918766 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.24s
2026-03-08 01:01:25.918770 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.24s
2026-03-08 01:01:25.918774 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.19s
2026-03-08 01:01:25.918779 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.15s
2026-03-08 01:01:25.918783 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.12s
2026-03-08 01:01:25.918787 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.09s
2026-03-08 01:01:25.918791 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s
2026-03-08 01:01:25.918796 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:01:25.918802 | orchestrator | 2026-03-08 01:01:25 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:28.943511 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED
2026-03-08 01:01:28.943760 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:01:28.944694 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED
2026-03-08 01:01:28.945638 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:01:28.945687 | orchestrator | 2026-03-08 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:31.976084 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state
STARTED 2026-03-08 01:01:31.976283 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:01:31.976639 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:01:31.977474 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:01:31.977526 | orchestrator | 2026-03-08 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:50.247055 | orchestrator | 2026-03-08 01:01:50 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state STARTED 2026-03-08 01:01:50.247472 | orchestrator
| 2026-03-08 01:01:50 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:01:50.248125 | orchestrator | 2026-03-08 01:01:50 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:01:50.248789 | orchestrator | 2026-03-08 01:01:50 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:01:50.248800 | orchestrator | 2026-03-08 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:53.273974 | orchestrator | 2026-03-08 01:01:53.274136 | orchestrator | 2026-03-08 01:01:53 | INFO  | Task ee79fef3-580e-448d-8ab2-dc8905b74ab0 is in state SUCCESS 2026-03-08 01:01:53.275663 | orchestrator | 2026-03-08 01:01:53.275718 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:01:53.275728 | orchestrator | 2026-03-08 01:01:53.275735 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:01:53.275743 | orchestrator | Sunday 08 March 2026 00:59:58 +0000 (0:00:00.241) 0:00:00.241 ********** 2026-03-08 01:01:53.275749 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:01:53.275756 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:01:53.275763 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:01:53.275769 | orchestrator | 2026-03-08 01:01:53.275776 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:01:53.275782 | orchestrator | Sunday 08 March 2026 00:59:58 +0000 (0:00:00.268) 0:00:00.509 ********** 2026-03-08 01:01:53.275788 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-08 01:01:53.275795 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-08 01:01:53.275801 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-08 01:01:53.275808 | orchestrator | 2026-03-08 01:01:53.275814 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-08 01:01:53.275821 | orchestrator | 2026-03-08 01:01:53.275827 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-08 01:01:53.275833 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:00.548) 0:00:01.057 ********** 2026-03-08 01:01:53.275840 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:01:53.275846 | orchestrator | 2026-03-08 01:01:53.275853 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-08 01:01:53.275859 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:00.687) 0:00:01.744 ********** 2026-03-08 01:01:53.275866 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-08 01:01:53.275872 | orchestrator | 2026-03-08 01:01:53.275891 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-08 01:01:53.275902 | orchestrator | Sunday 08 March 2026 01:00:03 +0000 (0:00:03.571) 0:00:05.316 ********** 2026-03-08 01:01:53.275932 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-08 01:01:53.275945 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-08 01:01:53.275957 | orchestrator | 2026-03-08 01:01:53.275969 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-08 01:01:53.275990 | orchestrator | Sunday 08 March 2026 01:00:10 +0000 (0:00:07.091) 0:00:12.407 ********** 2026-03-08 01:01:53.276001 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-08 01:01:53.276013 | orchestrator | 2026-03-08 01:01:53.276024 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-08 
01:01:53.276043 | orchestrator | Sunday 08 March 2026 01:00:14 +0000 (0:00:03.703) 0:00:16.111 ********** 2026-03-08 01:01:53.276075 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:01:53.276176 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-08 01:01:53.276191 | orchestrator | 2026-03-08 01:01:53.276202 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-08 01:01:53.276213 | orchestrator | Sunday 08 March 2026 01:00:18 +0000 (0:00:03.987) 0:00:20.099 ********** 2026-03-08 01:01:53.276234 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:01:53.276262 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-08 01:01:53.276273 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-08 01:01:53.276284 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-08 01:01:53.276296 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-08 01:01:53.276306 | orchestrator | 2026-03-08 01:01:53.276317 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-08 01:01:53.276329 | orchestrator | Sunday 08 March 2026 01:00:32 +0000 (0:00:14.338) 0:00:34.437 ********** 2026-03-08 01:01:53.276341 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-08 01:01:53.276352 | orchestrator | 2026-03-08 01:01:53.276363 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-08 01:01:53.276376 | orchestrator | Sunday 08 March 2026 01:00:35 +0000 (0:00:03.379) 0:00:37.816 ********** 2026-03-08 01:01:53.276391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.276422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.276436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.276463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276527 | orchestrator | 2026-03-08 01:01:53.276537 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-08 01:01:53.276548 | orchestrator | Sunday 08 March 2026 01:00:37 +0000 (0:00:01.707) 0:00:39.523 ********** 2026-03-08 01:01:53.276564 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-08 01:01:53.276575 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-08 01:01:53.276585 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-08 01:01:53.276595 | orchestrator | 2026-03-08 01:01:53.276605 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-08 01:01:53.276616 | orchestrator | Sunday 08 March 2026 01:00:38 +0000 (0:00:01.253) 0:00:40.777 ********** 2026-03-08 01:01:53.276626 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:53.276637 | orchestrator | 2026-03-08 01:01:53.276648 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-08 01:01:53.276659 | orchestrator | Sunday 08 March 2026 01:00:39 +0000 (0:00:00.141) 0:00:40.919 ********** 2026-03-08 01:01:53.276668 | orchestrator | 
skipping: [testbed-node-0] 2026-03-08 01:01:53.276679 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:53.276689 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:53.276699 | orchestrator | 2026-03-08 01:01:53.276714 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-08 01:01:53.276723 | orchestrator | Sunday 08 March 2026 01:00:39 +0000 (0:00:00.727) 0:00:41.646 ********** 2026-03-08 01:01:53.276734 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:01:53.276745 | orchestrator | 2026-03-08 01:01:53.276755 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-08 01:01:53.276765 | orchestrator | Sunday 08 March 2026 01:00:40 +0000 (0:00:01.016) 0:00:42.663 ********** 2026-03-08 01:01:53.276775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.276793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.276812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.276894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.276929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277052 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277116 | orchestrator | 2026-03-08 01:01:53.277128 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-08 01:01:53.277140 | orchestrator | Sunday 08 March 2026 01:00:44 +0000 (0:00:03.580) 0:00:46.243 ********** 2026-03-08 01:01:53.277153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.277171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277191 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:53.277209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.277228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277249 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:53.277262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.277272 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277297 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:53.277308 | orchestrator | 2026-03-08 01:01:53.277325 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-08 01:01:53.277334 | orchestrator | Sunday 08 March 2026 01:00:45 +0000 (0:00:01.112) 0:00:47.355 ********** 2026-03-08 01:01:53.277344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.277354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 
01:01:53.277373 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:53.277387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.277399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.277436 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:53.277446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.277456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 01:01:53.277469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:01:53.277480 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:01:53.277491 | orchestrator |
2026-03-08 01:01:53.277503 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-08 01:01:53.277512 | orchestrator | Sunday 08 March 2026 01:00:47 +0000 (0:00:01.932) 0:00:49.288 **********
2026-03-08 01:01:53.277522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external':
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.277544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.277554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.277565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.277634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:01:53.277643 | orchestrator |
2026-03-08 01:01:53.277653 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-08 01:01:53.277665 | orchestrator | Sunday 08 March 2026 01:00:50 +0000 (0:00:03.515) 0:00:52.804 **********
2026-03-08 01:01:53.277675 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:01:53.277685 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:01:53.277696 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:01:53.277705 | orchestrator |
2026-03-08 01:01:53.277715 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-08 01:01:53.277725 | orchestrator | Sunday 08 March 2026 01:00:53 +0000 (0:00:02.400) 0:00:55.204 **********
2026-03-08 01:01:53.277735 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 01:01:53.277746 | orchestrator |
2026-03-08 01:01:53.277757 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-08 01:01:53.277769 | orchestrator | Sunday 08 March 2026 01:00:54 +0000 (0:00:01.412) 0:00:56.617 **********
2026-03-08 01:01:53.277780 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:01:53.277791 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:01:53.277800 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:01:53.277910 | orchestrator |
2026-03-08 01:01:53.277925 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-08 01:01:53.277937 | orchestrator | Sunday 08 March 2026 01:00:55 +0000 (0:00:00.716) 0:00:57.333 **********
2026-03-08 01:01:53.277955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.277982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.277994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.278006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:01:53.278185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:01:53.278195 | orchestrator |
2026-03-08 01:01:53.278206 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-03-08 01:01:53.278217 | orchestrator | Sunday 08 March 2026 01:01:05 +0000 (0:00:09.990) 0:01:07.323 **********
2026-03-08 01:01:53.278228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.278244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.278261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.278272 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:53.278289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.278300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.278312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.278323 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:53.278333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:53.278353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:53.278364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:01:53.278375 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:01:53.278386 | orchestrator |
2026-03-08 01:01:53.278397 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-03-08 01:01:53.278407 | orchestrator | Sunday 08 March 2026 01:01:06 +0000 (0:00:00.783) 0:01:08.107 **********
2026-03-08 01:01:53.278424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 01:01:53.278436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.278451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:53.278469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:53.278554 | orchestrator | 2026-03-08 01:01:53.278566 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-08 01:01:53.278584 | orchestrator | Sunday 08 March 2026 01:01:09 +0000 (0:00:03.403) 0:01:11.510 ********** 2026-03-08 01:01:53.278606 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:53.278623 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
01:01:53.278634 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:53.278645 | orchestrator | 2026-03-08 01:01:53.278659 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-08 01:01:53.278678 | orchestrator | Sunday 08 March 2026 01:01:10 +0000 (0:00:00.479) 0:01:11.989 ********** 2026-03-08 01:01:53.278697 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:53.278713 | orchestrator | 2026-03-08 01:01:53.278724 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-08 01:01:53.278736 | orchestrator | Sunday 08 March 2026 01:01:12 +0000 (0:00:01.991) 0:01:13.981 ********** 2026-03-08 01:01:53.278754 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:53.278764 | orchestrator | 2026-03-08 01:01:53.278775 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-08 01:01:53.278786 | orchestrator | Sunday 08 March 2026 01:01:13 +0000 (0:00:01.902) 0:01:15.883 ********** 2026-03-08 01:01:53.278796 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:53.278806 | orchestrator | 2026-03-08 01:01:53.278817 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-08 01:01:53.278827 | orchestrator | Sunday 08 March 2026 01:01:25 +0000 (0:00:11.994) 0:01:27.878 ********** 2026-03-08 01:01:53.278838 | orchestrator | 2026-03-08 01:01:53.278848 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-08 01:01:53.278859 | orchestrator | Sunday 08 March 2026 01:01:26 +0000 (0:00:00.080) 0:01:27.959 ********** 2026-03-08 01:01:53.278868 | orchestrator | 2026-03-08 01:01:53.278881 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-08 01:01:53.278892 | orchestrator | Sunday 08 March 2026 01:01:26 +0000 (0:00:00.109) 0:01:28.068 ********** 2026-03-08 
01:01:53.278903 | orchestrator | 2026-03-08 01:01:53.278914 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-08 01:01:53.278925 | orchestrator | Sunday 08 March 2026 01:01:26 +0000 (0:00:00.131) 0:01:28.200 ********** 2026-03-08 01:01:53.278935 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:53.278944 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:53.278955 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:53.278965 | orchestrator | 2026-03-08 01:01:53.278976 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-08 01:01:53.278987 | orchestrator | Sunday 08 March 2026 01:01:33 +0000 (0:00:06.782) 0:01:34.982 ********** 2026-03-08 01:01:53.278997 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:53.279008 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:53.279024 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:53.279034 | orchestrator | 2026-03-08 01:01:53.279043 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-08 01:01:53.279053 | orchestrator | Sunday 08 March 2026 01:01:41 +0000 (0:00:08.658) 0:01:43.641 ********** 2026-03-08 01:01:53.279062 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:53.279072 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:53.279083 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:53.279122 | orchestrator | 2026-03-08 01:01:53.279134 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:01:53.279145 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:01:53.279156 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:01:53.279168 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:01:53.279178 | orchestrator | 2026-03-08 01:01:53.279188 | orchestrator | 2026-03-08 01:01:53.279199 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:01:53.279212 | orchestrator | Sunday 08 March 2026 01:01:50 +0000 (0:00:08.666) 0:01:52.308 ********** 2026-03-08 01:01:53.279223 | orchestrator | =============================================================================== 2026-03-08 01:01:53.279234 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.34s 2026-03-08 01:01:53.279245 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.99s 2026-03-08 01:01:53.279255 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.99s 2026-03-08 01:01:53.279265 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.67s 2026-03-08 01:01:53.279276 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.66s 2026-03-08 01:01:53.279287 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.09s 2026-03-08 01:01:53.279298 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.78s 2026-03-08 01:01:53.279309 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.99s 2026-03-08 01:01:53.279320 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.70s 2026-03-08 01:01:53.279331 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.58s 2026-03-08 01:01:53.279341 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.57s 2026-03-08 01:01:53.279352 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.52s 
2026-03-08 01:01:53.279362 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.40s 2026-03-08 01:01:53.279373 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.38s 2026-03-08 01:01:53.279383 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.40s 2026-03-08 01:01:53.279399 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.99s 2026-03-08 01:01:53.279409 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.93s 2026-03-08 01:01:53.279419 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.90s 2026-03-08 01:01:53.279430 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.71s 2026-03-08 01:01:53.279440 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.41s 2026-03-08 01:01:53.279451 | orchestrator | 2026-03-08 01:01:53 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:01:53.279462 | orchestrator | 2026-03-08 01:01:53 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED 2026-03-08 01:01:53.279473 | orchestrator | 2026-03-08 01:01:53 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:01:53.279484 | orchestrator | 2026-03-08 01:01:53 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:01:53.279493 | orchestrator | 2026-03-08 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:56.344173 | orchestrator | 2026-03-08 01:01:56 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:01:56.344447 | orchestrator | 2026-03-08 01:01:56 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED 2026-03-08 01:01:56.345193 | orchestrator | 2026-03-08 01:01:56 | INFO  | Task 
c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state STARTED 2026-03-08 01:02:42.028111 | orchestrator | 2026-03-08 01:02:42 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:02:42.028166 | orchestrator | 2026-03-08 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:02:45.069053 | orchestrator | 2026-03-08 01:02:45 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:02:45.070691 | orchestrator | 2026-03-08 01:02:45 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED 2026-03-08 01:02:45.075864 | orchestrator | 2026-03-08 01:02:45 | INFO  | Task c59fa4d8-36d6-4b7f-a28a-d47a9423ffcb is in state SUCCESS 2026-03-08 01:02:45.078116 | orchestrator | 2026-03-08 01:02:45.078202 | orchestrator | 2026-03-08 01:02:45.078217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:02:45.078230 | orchestrator | 2026-03-08 01:02:45.078239 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:02:45.078249 | orchestrator | Sunday 08 March 2026 00:59:58 +0000 (0:00:00.324) 0:00:00.324 ********** 2026-03-08 01:02:45.078285 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:02:45.078294 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:02:45.078301 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:02:45.078307 | orchestrator | 2026-03-08 01:02:45.078313 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:02:45.078320 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:00.473) 0:00:00.797 ********** 2026-03-08 01:02:45.078327 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-08 01:02:45.078334 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-08 01:02:45.078340 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-08 
01:02:45.078347 | orchestrator | 2026-03-08 01:02:45.078354 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-08 01:02:45.078360 | orchestrator | 2026-03-08 01:02:45.078366 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-08 01:02:45.078373 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:00.518) 0:00:01.316 ********** 2026-03-08 01:02:45.078379 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:02:45.078387 | orchestrator | 2026-03-08 01:02:45.078393 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-08 01:02:45.078399 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:00.745) 0:00:02.061 ********** 2026-03-08 01:02:45.078405 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-08 01:02:45.078411 | orchestrator | 2026-03-08 01:02:45.078417 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-08 01:02:45.078425 | orchestrator | Sunday 08 March 2026 01:00:03 +0000 (0:00:03.511) 0:00:05.573 ********** 2026-03-08 01:02:45.078430 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-08 01:02:45.078437 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-08 01:02:45.078442 | orchestrator | 2026-03-08 01:02:45.078448 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-08 01:02:45.078468 | orchestrator | Sunday 08 March 2026 01:00:10 +0000 (0:00:06.873) 0:00:12.446 ********** 2026-03-08 01:02:45.078474 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 01:02:45.078480 | orchestrator | 2026-03-08 01:02:45.078487 | orchestrator | 
TASK [service-ks-register : designate | Creating users] ************************ 2026-03-08 01:02:45.078494 | orchestrator | Sunday 08 March 2026 01:00:14 +0000 (0:00:03.573) 0:00:16.019 ********** 2026-03-08 01:02:45.078501 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:02:45.078507 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-08 01:02:45.078513 | orchestrator | 2026-03-08 01:02:45.078651 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-08 01:02:45.078661 | orchestrator | Sunday 08 March 2026 01:00:18 +0000 (0:00:04.477) 0:00:20.497 ********** 2026-03-08 01:02:45.078936 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:02:45.078943 | orchestrator | 2026-03-08 01:02:45.078948 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-08 01:02:45.078952 | orchestrator | Sunday 08 March 2026 01:00:22 +0000 (0:00:03.229) 0:00:23.727 ********** 2026-03-08 01:02:45.078956 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-08 01:02:45.078961 | orchestrator | 2026-03-08 01:02:45.078965 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-08 01:02:45.078969 | orchestrator | Sunday 08 March 2026 01:00:25 +0000 (0:00:03.385) 0:00:27.113 ********** 2026-03-08 01:02:45.078991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.079024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.079029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.079040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}}) 2026-03-08 01:02:45.079162 | orchestrator | 2026-03-08 01:02:45.079168 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-08 01:02:45.079179 | orchestrator | Sunday 08 March 2026 01:00:27 +0000 (0:00:02.456) 0:00:29.569 ********** 2026-03-08 01:02:45.079184 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:45.079191 | orchestrator | 2026-03-08 01:02:45.079197 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-08 01:02:45.079203 | orchestrator | Sunday 08 March 2026 01:00:28 +0000 (0:00:00.119) 0:00:29.689 ********** 2026-03-08 01:02:45.079208 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:45.079214 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:45.079220 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:45.079226 | orchestrator | 2026-03-08 01:02:45.079232 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-08 01:02:45.079238 | orchestrator | Sunday 08 March 2026 01:00:28 +0000 (0:00:00.290) 0:00:29.980 ********** 2026-03-08 01:02:45.079244 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:02:45.079250 | orchestrator | 2026-03-08 01:02:45.079256 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-08 01:02:45.079263 | orchestrator | Sunday 08 March 2026 01:00:28 +0000 (0:00:00.602) 0:00:30.583 ********** 2026-03-08 01:02:45.079274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.079281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.079292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.079300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.079760 | orchestrator | 2026-03-08 01:02:45.079768 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-08 01:02:45.079772 | orchestrator | Sunday 08 March 2026 01:00:34 +0000 (0:00:05.276) 0:00:35.859 ********** 2026-03-08 01:02:45.079777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.079797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.079802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079827 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:45.079831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.079838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.079842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079864 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:45.079870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.079881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.079885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.079912 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:45.079918 | orchestrator | 2026-03-08 01:02:45.079927 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-08 01:02:45.079936 | orchestrator | Sunday 08 March 2026 01:00:34 +0000 (0:00:00.760) 0:00:36.620 ********** 2026-03-08 01:02:45.079942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.079953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.079959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 
01:02:45.079971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080586 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
01:02:45.080595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.080631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.080648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080678 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:45.080684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.080710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.080722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.080752 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:45.080758 | orchestrator | 2026-03-08 01:02:45.080766 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-08 01:02:45.080772 | orchestrator | Sunday 08 March 2026 01:00:36 +0000 (0:00:01.040) 0:00:37.660 ********** 2026-03-08 01:02:45.080779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.080805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.080818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.080831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.080970 | orchestrator | 2026-03-08 01:02:45.080973 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-08 01:02:45.080998 | orchestrator | Sunday 08 March 2026 01:00:42 +0000 (0:00:06.230) 0:00:43.891 ********** 2026-03-08 01:02:45.081003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.081023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.081027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.081034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 
01:02:45.081042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 
01:02:45.081068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081132 | orchestrator | 2026-03-08 01:02:45.081138 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-08 01:02:45.081144 | orchestrator | Sunday 08 March 2026 01:01:02 +0000 (0:00:20.432) 0:01:04.324 ********** 2026-03-08 01:02:45.081150 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-08 01:02:45.081156 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-08 01:02:45.081167 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-08 01:02:45.081173 | orchestrator | 2026-03-08 01:02:45.081178 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-08 01:02:45.081183 | orchestrator | Sunday 08 March 2026 01:01:07 +0000 (0:00:04.970) 0:01:09.294 ********** 2026-03-08 01:02:45.081189 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-08 01:02:45.081195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-08 01:02:45.081201 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-08 01:02:45.081207 | orchestrator | 2026-03-08 01:02:45.081212 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-08 01:02:45.081218 | orchestrator | Sunday 08 
March 2026 01:01:11 +0000 (0:00:03.588) 0:01:12.883 ********** 2026-03-08 01:02:45.081232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-03-08 01:02:45.081357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081361 | orchestrator | 2026-03-08 01:02:45.081365 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-08 01:02:45.081368 | orchestrator | Sunday 08 March 2026 01:01:14 +0000 (0:00:02.805) 0:01:15.689 ********** 2026-03-08 01:02:45.081377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081475 | orchestrator | 2026-03-08 01:02:45.081478 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-08 01:02:45.081482 | orchestrator | Sunday 08 March 2026 01:01:16 +0000 (0:00:02.900) 0:01:18.589 ********** 2026-03-08 01:02:45.081486 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:45.081490 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
01:02:45.081494 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:45.081498 | orchestrator | 2026-03-08 01:02:45.081501 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-08 01:02:45.081505 | orchestrator | Sunday 08 March 2026 01:01:17 +0000 (0:00:00.383) 0:01:18.972 ********** 2026-03-08 01:02:45.081513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.081523 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.081538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081575 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:45.081579 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:45.081586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:45.081590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:45.081597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:45.081615 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:45.081619 | orchestrator | 2026-03-08 01:02:45.081623 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-08 01:02:45.081626 | orchestrator | Sunday 08 March 2026 01:01:18 +0000 (0:00:01.426) 0:01:20.399 ********** 2026-03-08 01:02:45.081634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.081639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.081650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:45.081654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:45.081728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}}) 2026-03-08 01:02:45.081732 | orchestrator | 2026-03-08 01:02:45.081736 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-08 01:02:45.081740 | orchestrator | Sunday 08 March 2026 01:01:23 +0000 (0:00:04.885) 0:01:25.284 ********** 2026-03-08 01:02:45.081744 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:45.081748 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:45.081751 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:45.081755 | orchestrator | 2026-03-08 01:02:45.081759 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-08 01:02:45.081763 | orchestrator | Sunday 08 March 2026 01:01:24 +0000 (0:00:00.560) 0:01:25.845 ********** 2026-03-08 01:02:45.081767 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-08 01:02:45.081771 | orchestrator | 2026-03-08 01:02:45.081774 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-08 01:02:45.081782 | orchestrator | Sunday 08 March 2026 01:01:26 +0000 (0:00:02.093) 0:01:27.938 ********** 2026-03-08 01:02:45.081785 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-08 01:02:45.081789 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-08 01:02:45.081793 | orchestrator | 2026-03-08 01:02:45.081797 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-08 01:02:45.081803 | orchestrator | Sunday 08 March 2026 01:01:28 +0000 (0:00:02.377) 0:01:30.315 ********** 2026-03-08 01:02:45.081807 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.081811 | orchestrator | 2026-03-08 01:02:45.081815 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-08 01:02:45.081819 | orchestrator | Sunday 08 March 2026 01:01:44 +0000 (0:00:16.094) 0:01:46.410 
********** 2026-03-08 01:02:45.081823 | orchestrator | 2026-03-08 01:02:45.081826 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-08 01:02:45.081830 | orchestrator | Sunday 08 March 2026 01:01:44 +0000 (0:00:00.057) 0:01:46.468 ********** 2026-03-08 01:02:45.081834 | orchestrator | 2026-03-08 01:02:45.081838 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-08 01:02:45.081842 | orchestrator | Sunday 08 March 2026 01:01:44 +0000 (0:00:00.076) 0:01:46.544 ********** 2026-03-08 01:02:45.081846 | orchestrator | 2026-03-08 01:02:45.081850 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-08 01:02:45.081855 | orchestrator | Sunday 08 March 2026 01:01:44 +0000 (0:00:00.061) 0:01:46.605 ********** 2026-03-08 01:02:45.081861 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.081866 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:02:45.081875 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:02:45.081883 | orchestrator | 2026-03-08 01:02:45.081889 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-08 01:02:45.081895 | orchestrator | Sunday 08 March 2026 01:01:52 +0000 (0:00:07.996) 0:01:54.602 ********** 2026-03-08 01:02:45.081901 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:02:45.081907 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:02:45.081913 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.081918 | orchestrator | 2026-03-08 01:02:45.081924 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-08 01:02:45.081930 | orchestrator | Sunday 08 March 2026 01:02:03 +0000 (0:00:10.118) 0:02:04.720 ********** 2026-03-08 01:02:45.081936 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.081942 | orchestrator | changed: 
[testbed-node-2] 2026-03-08 01:02:45.081947 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:02:45.081953 | orchestrator | 2026-03-08 01:02:45.081958 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-08 01:02:45.081964 | orchestrator | Sunday 08 March 2026 01:02:09 +0000 (0:00:06.716) 0:02:11.437 ********** 2026-03-08 01:02:45.081970 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:02:45.081976 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.082002 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:02:45.082008 | orchestrator | 2026-03-08 01:02:45.082069 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-08 01:02:45.082083 | orchestrator | Sunday 08 March 2026 01:02:21 +0000 (0:00:11.639) 0:02:23.076 ********** 2026-03-08 01:02:45.082089 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:02:45.082095 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:02:45.082101 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.082108 | orchestrator | 2026-03-08 01:02:45.082114 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-08 01:02:45.082120 | orchestrator | Sunday 08 March 2026 01:02:30 +0000 (0:00:08.621) 0:02:31.697 ********** 2026-03-08 01:02:45.082126 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.082130 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:02:45.082134 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:02:45.082145 | orchestrator | 2026-03-08 01:02:45.082148 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-08 01:02:45.082153 | orchestrator | Sunday 08 March 2026 01:02:35 +0000 (0:00:05.867) 0:02:37.565 ********** 2026-03-08 01:02:45.082160 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:02:45.082165 | orchestrator | 2026-03-08 
01:02:45.082171 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:02:45.082177 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 01:02:45.082185 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 01:02:45.082191 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 01:02:45.082197 | orchestrator |
2026-03-08 01:02:45.082202 | orchestrator |
2026-03-08 01:02:45.082208 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:02:45.082214 | orchestrator | Sunday 08 March 2026 01:02:43 +0000 (0:00:07.301) 0:02:44.866 **********
2026-03-08 01:02:45.082220 | orchestrator | ===============================================================================
2026-03-08 01:02:45.082225 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.43s
2026-03-08 01:02:45.082231 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.09s
2026-03-08 01:02:45.082237 | orchestrator | designate : Restart designate-producer container ----------------------- 11.64s
2026-03-08 01:02:45.082243 | orchestrator | designate : Restart designate-api container ---------------------------- 10.12s
2026-03-08 01:02:45.082248 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.62s
2026-03-08 01:02:45.082254 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.00s
2026-03-08 01:02:45.082259 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.30s
2026-03-08 01:02:45.082265 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.87s
2026-03-08 01:02:45.082272 | orchestrator | designate : Restart designate-central container ------------------------- 6.72s
2026-03-08 01:02:45.082278 | orchestrator | designate : Copying over config.json files for services ----------------- 6.23s
2026-03-08 01:02:45.082292 | orchestrator | designate : Restart designate-worker container -------------------------- 5.87s
2026-03-08 01:02:45.082298 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.28s
2026-03-08 01:02:45.082305 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.97s
2026-03-08 01:02:45.082311 | orchestrator | designate : Check designate containers ---------------------------------- 4.89s
2026-03-08 01:02:45.082317 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.48s
2026-03-08 01:02:45.082323 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.59s
2026-03-08 01:02:45.082328 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.57s
2026-03-08 01:02:45.082334 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.51s
2026-03-08 01:02:45.082340 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.39s
2026-03-08 01:02:45.082346 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.23s
2026-03-08 01:02:45.082353 | orchestrator | 2026-03-08 01:02:45 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:02:45.082359 | orchestrator | 2026-03-08 01:02:45 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:02:45.082366 | orchestrator | 2026-03-08 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:48.130380 | orchestrator | 2026-03-08 01:02:48 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:02:48.130494 | orchestrator | 2026-03-08 01:02:48 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED
2026-03-08 01:02:48.131145 | orchestrator | 2026-03-08 01:02:48 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:02:48.132282 | orchestrator | 2026-03-08 01:02:48 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:02:48.132313 | orchestrator | 2026-03-08 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:51.183289 | orchestrator | 2026-03-08 01:02:51 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:02:51.185325 | orchestrator | 2026-03-08 01:02:51 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED
2026-03-08 01:02:51.186793 | orchestrator | 2026-03-08 01:02:51 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:02:51.188486 | orchestrator | 2026-03-08 01:02:51 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:02:51.188591 | orchestrator | 2026-03-08 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:54.230271 | orchestrator | 2026-03-08 01:02:54 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:02:54.233192 | orchestrator | 2026-03-08 01:02:54 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED
2026-03-08 01:02:54.236416 | orchestrator | 2026-03-08 01:02:54 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:02:54.239511 | orchestrator | 2026-03-08 01:02:54 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:02:54.240232 | orchestrator | 2026-03-08 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:57.279486 | orchestrator | 2026-03-08 01:02:57 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:02:57.279996 | orchestrator | 2026-03-08 01:02:57 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED
2026-03-08 01:02:57.280499 | orchestrator | 2026-03-08 01:02:57 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:02:57.281113 | orchestrator | 2026-03-08 01:02:57 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:02:57.281148 | orchestrator | 2026-03-08 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:00.321525 | orchestrator | 2026-03-08 01:03:00 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:03:00.324775 | orchestrator | 2026-03-08 01:03:00 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED
2026-03-08 01:03:00.326006 | orchestrator | 2026-03-08 01:03:00 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:03:00.329087 | orchestrator | 2026-03-08 01:03:00 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:03:00.329124 | orchestrator | 2026-03-08 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:03.376214 | orchestrator | 2026-03-08 01:03:03 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:03:03.377068 | orchestrator | 2026-03-08 01:03:03 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state STARTED
2026-03-08 01:03:03.378969 | orchestrator | 2026-03-08 01:03:03 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED
2026-03-08 01:03:03.382218 | orchestrator | 2026-03-08 01:03:03 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED
2026-03-08 01:03:03.382293 | orchestrator | 2026-03-08 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:06.421040 | orchestrator | 2026-03-08 01:03:06 | INFO  | Task ee214069-b9f4-4eab-97f6-4b5f95787e86 is in state STARTED
2026-03-08 01:03:06.421740 | orchestrator | 2026-03-08 01:03:06 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED
2026-03-08 01:03:06.423230 | orchestrator | 2026-03-08 01:03:06 | INFO  | Task c832316a-ddf2-4842-bae9-779bd894b90e is in state SUCCESS
2026-03-08 01:03:06.424491 | orchestrator |
2026-03-08 01:03:06.424528 | orchestrator |
2026-03-08 01:03:06.424537 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:03:06.424545 | orchestrator |
2026-03-08 01:03:06.424551 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:03:06.424557 | orchestrator | Sunday 08 March 2026 01:01:56 +0000 (0:00:00.483) 0:00:00.483 **********
2026-03-08 01:03:06.424563 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:06.424575 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:06.424587 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:06.424603 | orchestrator |
2026-03-08 01:03:06.424612 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:03:06.424621 | orchestrator | Sunday 08 March 2026 01:01:57 +0000 (0:00:00.329) 0:00:00.813 **********
2026-03-08 01:03:06.424631 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-08 01:03:06.424641 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-08 01:03:06.424650 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-08 01:03:06.424660 | orchestrator |
2026-03-08 01:03:06.424671 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-08 01:03:06.424681 | orchestrator |
2026-03-08 01:03:06.424693 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-08 01:03:06.424704 | orchestrator | Sunday 08 March 2026 01:01:57 +0000 (0:00:00.572) 0:00:01.385 **********
2026-03-08 01:03:06.424721 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:03:06.424727 | orchestrator |
2026-03-08 01:03:06.424733 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-08 01:03:06.424739 | orchestrator | Sunday 08 March 2026 01:01:58 +0000 (0:00:00.517) 0:00:01.903 **********
2026-03-08 01:03:06.424745 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-08 01:03:06.424754 | orchestrator |
2026-03-08 01:03:06.424767 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-08 01:03:06.424779 | orchestrator | Sunday 08 March 2026 01:02:01 +0000 (0:00:03.166) 0:00:05.069 **********
2026-03-08 01:03:06.424789 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-08 01:03:06.424799 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-08 01:03:06.424809 | orchestrator |
2026-03-08 01:03:06.424819 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-08 01:03:06.424829 | orchestrator | Sunday 08 March 2026 01:02:07 +0000 (0:00:05.825) 0:00:10.895 **********
2026-03-08 01:03:06.424842 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:03:06.424855 | orchestrator |
2026-03-08 01:03:06.424865 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-08 01:03:06.424874 | orchestrator | Sunday 08 March 2026 01:02:10 +0000 (0:00:02.967) 0:00:13.862 **********
2026-03-08 01:03:06.424884 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:03:06.424893 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-08 01:03:06.424903 | orchestrator |
2026-03-08 01:03:06.424911 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-08 01:03:06.424984 | orchestrator | Sunday 08 March 2026 01:02:14 +0000 (0:00:04.265) 0:00:18.128 **********
2026-03-08 01:03:06.424997 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:03:06.425006 | orchestrator |
2026-03-08 01:03:06.425012 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-08 01:03:06.425017 | orchestrator | Sunday 08 March 2026 01:02:18 +0000 (0:00:03.478) 0:00:21.606 **********
2026-03-08 01:03:06.425023 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-08 01:03:06.425029 | orchestrator |
2026-03-08 01:03:06.425035 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-08 01:03:06.425040 | orchestrator | Sunday 08 March 2026 01:02:21 +0000 (0:00:03.636) 0:00:25.242 **********
2026-03-08 01:03:06.425046 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:06.425052 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:06.425058 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:06.425063 | orchestrator |
2026-03-08 01:03:06.425069 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-08 01:03:06.425075 | orchestrator | Sunday 08 March 2026 01:02:22 +0000 (0:00:00.299) 0:00:25.542 **********
2026-03-08 01:03:06.425083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425122 | orchestrator |
2026-03-08 01:03:06.425135 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-08 01:03:06.425142 | orchestrator | Sunday 08 March 2026 01:02:23 +0000 (0:00:01.198) 0:00:26.741 **********
2026-03-08 01:03:06.425149 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:06.425156 | orchestrator |
2026-03-08 01:03:06.425164 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-08 01:03:06.425171 | orchestrator | Sunday 08 March 2026 01:02:23 +0000 (0:00:00.147) 0:00:26.889 **********
2026-03-08 01:03:06.425178 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:06.425185 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:06.425192 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:06.425199 | orchestrator |
2026-03-08 01:03:06.425207 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-08 01:03:06.425213 | orchestrator | Sunday 08 March 2026 01:02:23 +0000 (0:00:00.579) 0:00:27.468 **********
2026-03-08 01:03:06.425221 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:03:06.425228 | orchestrator |
2026-03-08 01:03:06.425235 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-08 01:03:06.425241 | orchestrator | Sunday 08 March 2026 01:02:24 +0000 (0:00:00.555) 0:00:28.024 **********
2026-03-08 01:03:06.425249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425284 | orchestrator |
2026-03-08 01:03:06.425292 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-08 01:03:06.425299 | orchestrator | Sunday 08 March 2026 01:02:25 +0000 (0:00:01.466) 0:00:29.491 **********
2026-03-08 01:03:06.425306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425313 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:06.425321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425328 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:06.425339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425347 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:06.425354 | orchestrator |
2026-03-08 01:03:06.425362 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-08 01:03:06.425369 | orchestrator | Sunday 08 March 2026 01:02:26 +0000 (0:00:00.915) 0:00:30.407 **********
2026-03-08 01:03:06.425379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425390 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:06.425397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425406 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:06.425417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425434 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:06.425443 | orchestrator |
2026-03-08 01:03:06.425452 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-08 01:03:06.425462 | orchestrator | Sunday 08 March 2026 01:02:27 +0000 (0:00:00.750) 0:00:31.157 **********
2026-03-08 01:03:06.425479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425521 | orchestrator |
2026-03-08 01:03:06.425531 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-08 01:03:06.425541 | orchestrator | Sunday 08 March 2026 01:02:28 +0000 (0:00:01.341) 0:00:32.498 **********
2026-03-08 01:03:06.425552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425585 | orchestrator |
2026-03-08 01:03:06.425591 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-08 01:03:06.425599 | orchestrator | Sunday 08 March 2026 01:02:31 +0000 (0:00:02.564) 0:00:35.062 **********
2026-03-08 01:03:06.425605 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-08 01:03:06.425611 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-08 01:03:06.425617 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-08 01:03:06.425622 | orchestrator |
2026-03-08 01:03:06.425628 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-08 01:03:06.425634 | orchestrator | Sunday 08 March 2026 01:02:32 +0000 (0:00:01.369) 0:00:36.432 **********
2026-03-08 01:03:06.425640 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:03:06.425646 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:03:06.425652 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:03:06.425657 | orchestrator |
2026-03-08 01:03:06.425663 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-08 01:03:06.425669 | orchestrator | Sunday 08 March 2026 01:02:34 +0000 (0:00:01.216) 0:00:37.649 **********
2026-03-08 01:03:06.425675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425681 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:06.425687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425693 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:06.425703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425719 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:06.425725 | orchestrator |
2026-03-08 01:03:06.425731 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-03-08 01:03:06.425737 | orchestrator | Sunday 08 March 2026 01:02:34 +0000 (0:00:00.450) 0:00:38.099 **********
2026-03-08 01:03:06.425745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:06.425751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:06.425762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:06.425773 | orchestrator | 2026-03-08 01:03:06.425786 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-08 01:03:06.425805 | orchestrator | Sunday 08 March 2026 01:02:35 +0000 (0:00:00.997) 0:00:39.096 ********** 2026-03-08 01:03:06.425815 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:06.425825 | orchestrator | 2026-03-08 01:03:06.425835 | orchestrator | TASK [placement : Creating 
placement databases user and setting permissions] *** 2026-03-08 01:03:06.425845 | orchestrator | Sunday 08 March 2026 01:02:38 +0000 (0:00:02.621) 0:00:41.718 ********** 2026-03-08 01:03:06.425855 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:06.425862 | orchestrator | 2026-03-08 01:03:06.425867 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-08 01:03:06.425873 | orchestrator | Sunday 08 March 2026 01:02:40 +0000 (0:00:02.405) 0:00:44.124 ********** 2026-03-08 01:03:06.425884 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:06.425890 | orchestrator | 2026-03-08 01:03:06.425895 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-08 01:03:06.425901 | orchestrator | Sunday 08 March 2026 01:02:55 +0000 (0:00:15.301) 0:00:59.426 ********** 2026-03-08 01:03:06.425907 | orchestrator | 2026-03-08 01:03:06.425913 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-08 01:03:06.425918 | orchestrator | Sunday 08 March 2026 01:02:55 +0000 (0:00:00.065) 0:00:59.491 ********** 2026-03-08 01:03:06.425924 | orchestrator | 2026-03-08 01:03:06.425930 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-08 01:03:06.425957 | orchestrator | Sunday 08 March 2026 01:02:56 +0000 (0:00:00.066) 0:00:59.558 ********** 2026-03-08 01:03:06.425963 | orchestrator | 2026-03-08 01:03:06.425969 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-08 01:03:06.425975 | orchestrator | Sunday 08 March 2026 01:02:56 +0000 (0:00:00.066) 0:00:59.625 ********** 2026-03-08 01:03:06.425981 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:03:06.425986 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:03:06.425992 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:06.425998 | orchestrator | 
2026-03-08 01:03:06.426004 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:03:06.426011 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:03:06.426060 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:03:06.426070 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:03:06.426080 | orchestrator | 2026-03-08 01:03:06.426090 | orchestrator | 2026-03-08 01:03:06.426100 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:03:06.426111 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:07.145) 0:01:06.770 ********** 2026-03-08 01:03:06.426119 | orchestrator | =============================================================================== 2026-03-08 01:03:06.426124 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.30s 2026-03-08 01:03:06.426130 | orchestrator | placement : Restart placement-api container ----------------------------- 7.15s 2026-03-08 01:03:06.426136 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.83s 2026-03-08 01:03:06.426142 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.27s 2026-03-08 01:03:06.426147 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.64s 2026-03-08 01:03:06.426154 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.48s 2026-03-08 01:03:06.426163 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.17s 2026-03-08 01:03:06.426176 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.97s 2026-03-08 01:03:06.426188 | 
orchestrator | placement : Creating placement databases -------------------------------- 2.62s 2026-03-08 01:03:06.426204 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.56s 2026-03-08 01:03:06.426213 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.41s 2026-03-08 01:03:06.426223 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.47s 2026-03-08 01:03:06.426233 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.37s 2026-03-08 01:03:06.426242 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s 2026-03-08 01:03:06.426250 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.22s 2026-03-08 01:03:06.426259 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.20s 2026-03-08 01:03:06.426270 | orchestrator | placement : Check placement containers ---------------------------------- 1.00s 2026-03-08 01:03:06.426279 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.92s 2026-03-08 01:03:06.426289 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.75s 2026-03-08 01:03:06.426298 | orchestrator | placement : Set placement policy file ----------------------------------- 0.58s 2026-03-08 01:03:06.426307 | orchestrator | 2026-03-08 01:03:06 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:06.426476 | orchestrator | 2026-03-08 01:03:06 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:06.426571 | orchestrator | 2026-03-08 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:09.456021 | orchestrator | 2026-03-08 01:03:09 | INFO  | Task ee214069-b9f4-4eab-97f6-4b5f95787e86 is in state STARTED 2026-03-08 
01:03:09.457606 | orchestrator | 2026-03-08 01:03:09 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:09.458489 | orchestrator | 2026-03-08 01:03:09 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:09.459246 | orchestrator | 2026-03-08 01:03:09 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:09.459377 | orchestrator | 2026-03-08 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:12.501444 | orchestrator | 2026-03-08 01:03:12 | INFO  | Task ee214069-b9f4-4eab-97f6-4b5f95787e86 is in state SUCCESS 2026-03-08 01:03:12.504450 | orchestrator | 2026-03-08 01:03:12 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:12.506184 | orchestrator | 2026-03-08 01:03:12 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:12.507800 | orchestrator | 2026-03-08 01:03:12 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:12.509484 | orchestrator | 2026-03-08 01:03:12 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:12.509517 | orchestrator | 2026-03-08 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:15.558489 | orchestrator | 2026-03-08 01:03:15 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:15.559574 | orchestrator | 2026-03-08 01:03:15 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:15.561485 | orchestrator | 2026-03-08 01:03:15 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:15.563007 | orchestrator | 2026-03-08 01:03:15 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:15.563051 | orchestrator | 2026-03-08 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:18.598771 | orchestrator 
| 2026-03-08 01:03:18 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:18.599901 | orchestrator | 2026-03-08 01:03:18 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:18.604011 | orchestrator | 2026-03-08 01:03:18 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:18.605292 | orchestrator | 2026-03-08 01:03:18 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:18.605334 | orchestrator | 2026-03-08 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:21.646150 | orchestrator | 2026-03-08 01:03:21 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:21.647316 | orchestrator | 2026-03-08 01:03:21 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:21.649418 | orchestrator | 2026-03-08 01:03:21 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:21.652297 | orchestrator | 2026-03-08 01:03:21 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:21.652341 | orchestrator | 2026-03-08 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:24.686357 | orchestrator | 2026-03-08 01:03:24 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:24.691049 | orchestrator | 2026-03-08 01:03:24 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:24.691549 | orchestrator | 2026-03-08 01:03:24 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:24.693756 | orchestrator | 2026-03-08 01:03:24 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:24.695175 | orchestrator | 2026-03-08 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:27.745728 | orchestrator | 2026-03-08 01:03:27 | INFO  | 
Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:27.747325 | orchestrator | 2026-03-08 01:03:27 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:27.749556 | orchestrator | 2026-03-08 01:03:27 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:27.751274 | orchestrator | 2026-03-08 01:03:27 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:27.751317 | orchestrator | 2026-03-08 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:30.780398 | orchestrator | 2026-03-08 01:03:30 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:30.780759 | orchestrator | 2026-03-08 01:03:30 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:30.781581 | orchestrator | 2026-03-08 01:03:30 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:30.782502 | orchestrator | 2026-03-08 01:03:30 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:30.782527 | orchestrator | 2026-03-08 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:33.821941 | orchestrator | 2026-03-08 01:03:33 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:33.822477 | orchestrator | 2026-03-08 01:03:33 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:33.823334 | orchestrator | 2026-03-08 01:03:33 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:33.824279 | orchestrator | 2026-03-08 01:03:33 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:33.824313 | orchestrator | 2026-03-08 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:36.872075 | orchestrator | 2026-03-08 01:03:36 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:36.873233 | orchestrator | 2026-03-08 01:03:36 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:36.874384 | orchestrator | 2026-03-08 01:03:36 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:36.876067 | orchestrator | 2026-03-08 01:03:36 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:36.876118 | orchestrator | 2026-03-08 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:39.899221 | orchestrator | 2026-03-08 01:03:39 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:39.899279 | orchestrator | 2026-03-08 01:03:39 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:39.899288 | orchestrator | 2026-03-08 01:03:39 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:39.899295 | orchestrator | 2026-03-08 01:03:39 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:39.899315 | orchestrator | 2026-03-08 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:42.932006 | orchestrator | 2026-03-08 01:03:42 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:42.932054 | orchestrator | 2026-03-08 01:03:42 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:42.932059 | orchestrator | 2026-03-08 01:03:42 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:42.932064 | orchestrator | 2026-03-08 01:03:42 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:42.932068 | orchestrator | 2026-03-08 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:45.965746 | orchestrator | 2026-03-08 01:03:45 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:45.966194 | orchestrator | 2026-03-08 01:03:45 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:45.966715 | orchestrator | 2026-03-08 01:03:45 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:45.967444 | orchestrator | 2026-03-08 01:03:45 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:45.967466 | orchestrator | 2026-03-08 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:48.985610 | orchestrator | 2026-03-08 01:03:48 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:48.985762 | orchestrator | 2026-03-08 01:03:48 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:48.986351 | orchestrator | 2026-03-08 01:03:48 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:48.986926 | orchestrator | 2026-03-08 01:03:48 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:48.986946 | orchestrator | 2026-03-08 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:52.226052 | orchestrator | 2026-03-08 01:03:52 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:52.226105 | orchestrator | 2026-03-08 01:03:52 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:52.226111 | orchestrator | 2026-03-08 01:03:52 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:52.226129 | orchestrator | 2026-03-08 01:03:52 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:52.226134 | orchestrator | 2026-03-08 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:55.036359 | orchestrator | 2026-03-08 01:03:55 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:55.037588 | orchestrator | 2026-03-08 01:03:55 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:55.038338 | orchestrator | 2026-03-08 01:03:55 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:55.039029 | orchestrator | 2026-03-08 01:03:55 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:55.039056 | orchestrator | 2026-03-08 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:58.070245 | orchestrator | 2026-03-08 01:03:58 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:03:58.072615 | orchestrator | 2026-03-08 01:03:58 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:03:58.074113 | orchestrator | 2026-03-08 01:03:58 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:03:58.076063 | orchestrator | 2026-03-08 01:03:58 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:03:58.076120 | orchestrator | 2026-03-08 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:01.109155 | orchestrator | 2026-03-08 01:04:01 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:01.109233 | orchestrator | 2026-03-08 01:04:01 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:01.110057 | orchestrator | 2026-03-08 01:04:01 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:01.110954 | orchestrator | 2026-03-08 01:04:01 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:01.110991 | orchestrator | 2026-03-08 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:04.152547 | orchestrator | 2026-03-08 01:04:04 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:04.154196 | orchestrator | 2026-03-08 01:04:04 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:04.156426 | orchestrator | 2026-03-08 01:04:04 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:04.158280 | orchestrator | 2026-03-08 01:04:04 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:04.158321 | orchestrator | 2026-03-08 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:07.208078 | orchestrator | 2026-03-08 01:04:07 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:07.209836 | orchestrator | 2026-03-08 01:04:07 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:07.213006 | orchestrator | 2026-03-08 01:04:07 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:07.214643 | orchestrator | 2026-03-08 01:04:07 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:07.215239 | orchestrator | 2026-03-08 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:10.262580 | orchestrator | 2026-03-08 01:04:10 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:10.262673 | orchestrator | 2026-03-08 01:04:10 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:10.262679 | orchestrator | 2026-03-08 01:04:10 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:10.262684 | orchestrator | 2026-03-08 01:04:10 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:10.262688 | orchestrator | 2026-03-08 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:13.303667 | orchestrator | 2026-03-08 01:04:13 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:13.304973 | orchestrator | 2026-03-08 01:04:13 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:13.307078 | orchestrator | 2026-03-08 01:04:13 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:13.309060 | orchestrator | 2026-03-08 01:04:13 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:13.309115 | orchestrator | 2026-03-08 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:16.358205 | orchestrator | 2026-03-08 01:04:16 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:16.358742 | orchestrator | 2026-03-08 01:04:16 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:16.359426 | orchestrator | 2026-03-08 01:04:16 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:16.360196 | orchestrator | 2026-03-08 01:04:16 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:16.360221 | orchestrator | 2026-03-08 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:19.391865 | orchestrator | 2026-03-08 01:04:19 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:19.392897 | orchestrator | 2026-03-08 01:04:19 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:19.394396 | orchestrator | 2026-03-08 01:04:19 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:19.396013 | orchestrator | 2026-03-08 01:04:19 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:19.396120 | orchestrator | 2026-03-08 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:22.450700 | orchestrator | 2026-03-08 01:04:22 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:22.452711 | orchestrator | 2026-03-08 01:04:22 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:22.454570 | orchestrator | 2026-03-08 01:04:22 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:22.456980 | orchestrator | 2026-03-08 01:04:22 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:22.457036 | orchestrator | 2026-03-08 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:25.504034 | orchestrator | 2026-03-08 01:04:25 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:25.506331 | orchestrator | 2026-03-08 01:04:25 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:25.507877 | orchestrator | 2026-03-08 01:04:25 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:25.509545 | orchestrator | 2026-03-08 01:04:25 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:25.509588 | orchestrator | 2026-03-08 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:28.547577 | orchestrator | 2026-03-08 01:04:28 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:28.550276 | orchestrator | 2026-03-08 01:04:28 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:28.552049 | orchestrator | 2026-03-08 01:04:28 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:28.555048 | orchestrator | 2026-03-08 01:04:28 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:28.555108 | orchestrator | 2026-03-08 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:31.597134 | orchestrator | 2026-03-08 01:04:31 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:31.597305 | orchestrator | 2026-03-08 01:04:31 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:31.598422 | orchestrator | 2026-03-08 01:04:31 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:31.600006 | orchestrator | 2026-03-08 01:04:31 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:31.600045 | orchestrator | 2026-03-08 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:34.637354 | orchestrator | 2026-03-08 01:04:34 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:34.638298 | orchestrator | 2026-03-08 01:04:34 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:34.640654 | orchestrator | 2026-03-08 01:04:34 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:34.641845 | orchestrator | 2026-03-08 01:04:34 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:34.641952 | orchestrator | 2026-03-08 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:37.682722 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:37.682808 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:37.684377 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:37.685206 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state STARTED 2026-03-08 01:04:37.685237 | orchestrator | 2026-03-08 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:40.742474 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 
ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:40.744974 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:40.748407 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:40.748657 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:40.751460 | orchestrator | 2026-03-08 01:04:40.751547 | orchestrator | 2026-03-08 01:04:40.751560 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:04:40.751569 | orchestrator | 2026-03-08 01:04:40.751590 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:04:40.751597 | orchestrator | Sunday 08 March 2026 01:03:08 +0000 (0:00:00.207) 0:00:00.207 ********** 2026-03-08 01:04:40.751605 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:04:40.751613 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:04:40.751639 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:04:40.751647 | orchestrator | 2026-03-08 01:04:40.751654 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:04:40.751661 | orchestrator | Sunday 08 March 2026 01:03:08 +0000 (0:00:00.324) 0:00:00.531 ********** 2026-03-08 01:04:40.751668 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-08 01:04:40.751676 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-08 01:04:40.751683 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-08 01:04:40.751690 | orchestrator | 2026-03-08 01:04:40.751719 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-08 01:04:40.751726 | orchestrator | 2026-03-08 01:04:40.751823 | orchestrator | TASK 
[Waiting for Keystone public port to be UP] ******************************* 2026-03-08 01:04:40.751831 | orchestrator | Sunday 08 March 2026 01:03:09 +0000 (0:00:00.630) 0:00:01.162 ********** 2026-03-08 01:04:40.751839 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:04:40.751847 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:04:40.751854 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:04:40.751861 | orchestrator | 2026-03-08 01:04:40.751868 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:04:40.751876 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.751884 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.751891 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.751898 | orchestrator | 2026-03-08 01:04:40.751905 | orchestrator | 2026-03-08 01:04:40.751912 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:04:40.751919 | orchestrator | Sunday 08 March 2026 01:03:10 +0000 (0:00:00.760) 0:00:01.923 ********** 2026-03-08 01:04:40.751927 | orchestrator | =============================================================================== 2026-03-08 01:04:40.751934 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.76s 2026-03-08 01:04:40.751941 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-03-08 01:04:40.751948 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-03-08 01:04:40.751955 | orchestrator | 2026-03-08 01:04:40.751962 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 2a392246-0a48-4a34-8ae4-d624cd2f9905 is in state SUCCESS 2026-03-08 01:04:40.752386 | orchestrator | 
2026-03-08 01:04:40.752428 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:04:40.752437 | orchestrator |
2026-03-08 01:04:40.752445 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:04:40.752452 | orchestrator | Sunday 08 March 2026 00:59:58 +0000 (0:00:00.335) 0:00:00.335 **********
2026-03-08 01:04:40.752459 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:04:40.752466 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:04:40.752474 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:04:40.752481 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:04:40.752488 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:04:40.752495 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:04:40.752528 | orchestrator |
2026-03-08 01:04:40.752537 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:04:40.752544 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:00.927) 0:00:01.262 **********
2026-03-08 01:04:40.752551 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-08 01:04:40.752558 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-08 01:04:40.752566 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-08 01:04:40.752573 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-08 01:04:40.752599 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-08 01:04:40.752616 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-08 01:04:40.752623 | orchestrator |
2026-03-08 01:04:40.752630 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-08 01:04:40.752637 | orchestrator |
2026-03-08 01:04:40.752644 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-08 01:04:40.752652 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:00.650) 0:00:01.913 **********
2026-03-08 01:04:40.752660 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:04:40.752667 | orchestrator |
2026-03-08 01:04:40.752675 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-08 01:04:40.752682 | orchestrator | Sunday 08 March 2026 01:00:01 +0000 (0:00:00.988) 0:00:02.901 **********
2026-03-08 01:04:40.752689 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:04:40.752696 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:04:40.752704 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:04:40.752711 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:04:40.752718 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:04:40.752842 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:04:40.752851 | orchestrator |
2026-03-08 01:04:40.752859 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-08 01:04:40.752866 | orchestrator | Sunday 08 March 2026 01:00:02 +0000 (0:00:01.198) 0:00:04.099 **********
2026-03-08 01:04:40.752873 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:04:40.752881 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:04:40.752888 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:04:40.752895 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:04:40.752909 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:04:40.752916 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:04:40.752923 | orchestrator |
2026-03-08 01:04:40.752930 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-08 01:04:40.752937 | orchestrator | Sunday 08 March 2026 01:00:03 +0000 (0:00:01.073) 0:00:05.173 **********
2026-03-08 01:04:40.752945 | orchestrator | ok: [testbed-node-0] => {
2026-03-08 01:04:40.752952 | orchestrator |  "changed": false,
2026-03-08 01:04:40.752959 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:04:40.752966 | orchestrator | }
2026-03-08 01:04:40.752974 | orchestrator | ok: [testbed-node-1] => {
2026-03-08 01:04:40.752981 | orchestrator |  "changed": false,
2026-03-08 01:04:40.752988 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:04:40.752995 | orchestrator | }
2026-03-08 01:04:40.753002 | orchestrator | ok: [testbed-node-2] => {
2026-03-08 01:04:40.753011 | orchestrator |  "changed": false,
2026-03-08 01:04:40.753020 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:04:40.753029 | orchestrator | }
2026-03-08 01:04:40.753037 | orchestrator | ok: [testbed-node-3] => {
2026-03-08 01:04:40.753046 | orchestrator |  "changed": false,
2026-03-08 01:04:40.753054 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:04:40.753063 | orchestrator | }
2026-03-08 01:04:40.753071 | orchestrator | ok: [testbed-node-4] => {
2026-03-08 01:04:40.753079 | orchestrator |  "changed": false,
2026-03-08 01:04:40.753087 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:04:40.753096 | orchestrator | }
2026-03-08 01:04:40.753105 | orchestrator | ok: [testbed-node-5] => {
2026-03-08 01:04:40.753113 | orchestrator |  "changed": false,
2026-03-08 01:04:40.753122 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:04:40.753131 | orchestrator | }
2026-03-08 01:04:40.753139 | orchestrator |
2026-03-08 01:04:40.753146 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-08 01:04:40.753153 | orchestrator | Sunday 08 March 2026 01:00:04 +0000 (0:00:00.887) 0:00:06.061 **********
2026-03-08 01:04:40.753160 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.753168 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.753175 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.753187 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.753195 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.753202 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.753209 | orchestrator |
2026-03-08 01:04:40.753216 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-08 01:04:40.753223 | orchestrator | Sunday 08 March 2026 01:00:05 +0000 (0:00:00.666) 0:00:06.728 **********
2026-03-08 01:04:40.753231 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-08 01:04:40.753238 | orchestrator |
2026-03-08 01:04:40.753245 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-08 01:04:40.753252 | orchestrator | Sunday 08 March 2026 01:00:08 +0000 (0:00:03.314) 0:00:10.042 **********
2026-03-08 01:04:40.753259 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-08 01:04:40.753267 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-08 01:04:40.753274 | orchestrator |
2026-03-08 01:04:40.753292 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-08 01:04:40.753300 | orchestrator | Sunday 08 March 2026 01:00:15 +0000 (0:00:06.566) 0:00:16.609 **********
2026-03-08 01:04:40.753307 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:04:40.753314 | orchestrator |
2026-03-08 01:04:40.753321 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-08 01:04:40.753328 | orchestrator | Sunday 08 March 2026 01:00:18 +0000 (0:00:03.517) 0:00:20.126 **********
2026-03-08 01:04:40.753335 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:04:40.753342 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-08 01:04:40.753350 | orchestrator |
2026-03-08 01:04:40.753357 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-08 01:04:40.753364 | orchestrator | Sunday 08 March 2026 01:00:22 +0000 (0:00:03.769) 0:00:23.896 **********
2026-03-08 01:04:40.753371 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:04:40.753379 | orchestrator |
2026-03-08 01:04:40.753386 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-08 01:04:40.753393 | orchestrator | Sunday 08 March 2026 01:00:25 +0000 (0:00:03.046) 0:00:26.943 **********
2026-03-08 01:04:40.753400 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-08 01:04:40.753407 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-08 01:04:40.753414 | orchestrator |
2026-03-08 01:04:40.753422 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-08 01:04:40.753429 | orchestrator | Sunday 08 March 2026 01:00:31 +0000 (0:00:06.353) 0:00:33.296 **********
2026-03-08 01:04:40.753436 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.753443 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.753450 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.753457 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.753464 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.753471 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.753478 | orchestrator |
2026-03-08 01:04:40.753485 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-08 01:04:40.753493 | orchestrator | Sunday 08 March 2026 01:00:32 +0000 (0:00:00.625) 0:00:33.921 **********
2026-03-08 01:04:40.753500 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.753507 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.753514 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.753521 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.753528 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.753535 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.753542 | orchestrator |
2026-03-08 01:04:40.753549 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-08 01:04:40.753562 | orchestrator | Sunday 08 March 2026 01:00:34 +0000 (0:00:01.970) 0:00:35.892 **********
2026-03-08 01:04:40.753569 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:04:40.753576 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:04:40.753583 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:04:40.753590 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:04:40.753600 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:04:40.753608 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:04:40.753615 | orchestrator |
2026-03-08 01:04:40.753622 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-08 01:04:40.753629 | orchestrator | Sunday 08 March 2026 01:00:36 +0000 (0:00:01.893) 0:00:37.786 **********
2026-03-08 01:04:40.753636 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.753644 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.753651 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.753658 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.753665 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.753672 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.753679 | orchestrator |
2026-03-08 01:04:40.753686 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-08 01:04:40.753693 | orchestrator | Sunday 08 March 2026 01:00:38 +0000 (0:00:02.364) 0:00:40.151 ********** 2026-03-08
01:04:40.753704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.753721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.753744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.753755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.753771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.753779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.753787 | orchestrator |
2026-03-08 01:04:40.753795 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-08 01:04:40.753802 | orchestrator | Sunday 08 March 2026 01:00:42 +0000 (0:00:03.443) 0:00:43.594 **********
2026-03-08 01:04:40.753810 | orchestrator | [WARNING]: Skipped
2026-03-08 01:04:40.753817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-08 01:04:40.753825 | orchestrator | due to this access issue:
2026-03-08 01:04:40.753832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-08 01:04:40.753840 | orchestrator | a directory
2026-03-08 01:04:40.753847 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 01:04:40.753854 | orchestrator |
2026-03-08 01:04:40.753865 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-08 01:04:40.753873 | orchestrator | Sunday 08 March 2026 01:00:43 +0000 (0:00:01.184) 0:00:44.779 **********
2026-03-08 01:04:40.753880 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:04:40.753888 | orchestrator |
2026-03-08 01:04:40.753895 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-08 01:04:40.753903 | orchestrator | Sunday 08 March 2026 01:00:44 +0000 (0:00:01.476) 0:00:46.255 **********
2026-03-08 01:04:40.753910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.753926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.753934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.753942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.753956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.753968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.753976 | orchestrator |
2026-03-08 01:04:40.753984 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-08 01:04:40.753991 | orchestrator | Sunday 08 March 2026 01:00:48 +0000 (0:00:03.800) 0:00:50.055 **********
2026-03-08 01:04:40.754006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.754075 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.754095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.754108 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.754127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.754141 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.754154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.754176 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.754189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.754203 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.754221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.754234 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.754244 | orchestrator |
2026-03-08 01:04:40.754251 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-08 01:04:40.754259 | orchestrator | Sunday 08 March 2026 01:00:52 +0000 (0:00:03.164) 0:00:53.665 **********
2026-03-08 01:04:40.754266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.754274 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.754286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.754298 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.754305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.754314 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.754324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.754332 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.754339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:04:40.754346 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.754354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:04:40.754368 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.754375 | orchestrator |
2026-03-08 01:04:40.754382 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-03-08 01:04:40.754394 | orchestrator | Sunday 08 March 2026 01:00:55 +0000 (0:00:03.109) 0:00:56.830 **********
2026-03-08 01:04:40.754401 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.754408 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.754415 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.754422 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.754430 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.754437 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.754444 | orchestrator |
2026-03-08 01:04:40.754451 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-03-08 01:04:40.754458 | orchestrator | Sunday 08 March 2026 01:00:58 +0000 (0:00:00.109) 0:00:59.939 **********
2026-03-08 01:04:40.754465 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.754472 | orchestrator |
2026-03-08 01:04:40.754479 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-03-08 01:04:40.754486 | orchestrator | Sunday 08 March 2026 01:00:58 +0000 (0:00:00.613) 0:01:00.048 **********
2026-03-08 01:04:40.754493 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.754500 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.754507 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.754514 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:04:40.754521 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:04:40.754528 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:04:40.754535 | orchestrator |
2026-03-08 01:04:40.754542 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-03-08 01:04:40.754550 | orchestrator | Sunday 08 March 2026 01:00:59 +0000 (0:00:00.613)
0:01:00.662 ********** 2026-03-08 01:04:40.754557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.754565 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.754575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 
01:04:40.754587 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.754594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.754602 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.754623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.754642 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.754654 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.754667 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.754725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.754765 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.754779 | orchestrator | 2026-03-08 01:04:40.754791 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-08 01:04:40.754802 | orchestrator | Sunday 08 March 2026 01:01:02 +0000 (0:00:03.001) 0:01:03.664 ********** 2026-03-08 01:04:40.754815 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.754847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.754861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.754875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.754893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.754914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.754927 | orchestrator | 2026-03-08 01:04:40.754939 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-08 01:04:40.754947 | orchestrator | Sunday 08 March 2026 01:01:06 +0000 (0:00:04.079) 0:01:07.744 ********** 2026-03-08 01:04:40.754978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.754986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.754994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.755016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.755030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.755042 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.755050 | orchestrator | 2026-03-08 01:04:40.755058 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-08 01:04:40.755065 | orchestrator | Sunday 08 March 2026 01:01:11 +0000 (0:00:05.319) 0:01:13.063 ********** 2026-03-08 01:04:40.755072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.755080 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 01:04:40.755090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.755105 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.755120 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.755135 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.755155 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.755171 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755179 | orchestrator | 2026-03-08 01:04:40.755188 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-08 01:04:40.755206 | orchestrator | Sunday 08 March 2026 01:01:14 +0000 (0:00:02.617) 0:01:15.681 ********** 2026-03-08 01:04:40.755219 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755231 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755243 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755255 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:04:40.755266 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.755279 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:04:40.755291 | orchestrator | 2026-03-08 01:04:40.755304 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-08 01:04:40.755322 | orchestrator | Sunday 08 March 2026 01:01:17 +0000 (0:00:03.023) 0:01:18.705 ********** 2026-03-08 01:04:40.755336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.755345 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.755360 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.755383 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.755408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.755416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.755423 | orchestrator | 2026-03-08 01:04:40.755431 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-08 01:04:40.755438 | orchestrator | Sunday 08 March 2026 01:01:21 +0000 (0:00:04.003) 0:01:22.709 ********** 2026-03-08 01:04:40.755446 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755453 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755460 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755467 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755474 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755481 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755488 | orchestrator | 2026-03-08 01:04:40.755495 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] 
**************************** 2026-03-08 01:04:40.755502 | orchestrator | Sunday 08 March 2026 01:01:23 +0000 (0:00:02.110) 0:01:24.819 ********** 2026-03-08 01:04:40.755510 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755517 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755524 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755531 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755538 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755545 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755553 | orchestrator | 2026-03-08 01:04:40.755560 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-08 01:04:40.755567 | orchestrator | Sunday 08 March 2026 01:01:25 +0000 (0:00:02.137) 0:01:26.957 ********** 2026-03-08 01:04:40.755579 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755586 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755594 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755601 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755608 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755615 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755623 | orchestrator | 2026-03-08 01:04:40.755630 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-08 01:04:40.755642 | orchestrator | Sunday 08 March 2026 01:01:28 +0000 (0:00:02.705) 0:01:29.663 ********** 2026-03-08 01:04:40.755649 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755656 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755663 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755671 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755678 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755685 | orchestrator | skipping: [testbed-node-5] 
2026-03-08 01:04:40.755692 | orchestrator | 2026-03-08 01:04:40.755699 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-08 01:04:40.755706 | orchestrator | Sunday 08 March 2026 01:01:30 +0000 (0:00:02.493) 0:01:32.156 ********** 2026-03-08 01:04:40.755713 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755721 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755771 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755783 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755790 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755797 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755805 | orchestrator | 2026-03-08 01:04:40.755812 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-08 01:04:40.755819 | orchestrator | Sunday 08 March 2026 01:01:32 +0000 (0:00:02.201) 0:01:34.357 ********** 2026-03-08 01:04:40.755826 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755833 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755840 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755847 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755854 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755862 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755869 | orchestrator | 2026-03-08 01:04:40.755877 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-08 01:04:40.755884 | orchestrator | Sunday 08 March 2026 01:01:36 +0000 (0:00:03.267) 0:01:37.624 ********** 2026-03-08 01:04:40.755891 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-08 01:04:40.755899 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.755906 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-08 01:04:40.755913 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.755920 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-08 01:04:40.755927 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.755939 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-08 01:04:40.755946 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.755953 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-08 01:04:40.755960 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.755967 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-08 01:04:40.755974 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.755981 | orchestrator | 2026-03-08 01:04:40.755989 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-08 01:04:40.755996 | orchestrator | Sunday 08 March 2026 01:01:38 +0000 (0:00:02.019) 0:01:39.644 ********** 2026-03-08 01:04:40.756003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.756020 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.756043 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.756059 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.756077 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.756097 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 01:04:40.756104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.756112 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756119 | orchestrator | 2026-03-08 01:04:40.756126 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-08 01:04:40.756133 | orchestrator | Sunday 08 March 2026 01:01:40 +0000 (0:00:01.978) 0:01:41.623 ********** 2026-03-08 01:04:40.756380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.756403 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.756430 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.756460 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.756475 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.756490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.756498 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756505 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.756512 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756518 | orchestrator | 2026-03-08 01:04:40.756525 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-08 01:04:40.756532 | orchestrator | Sunday 08 March 2026 01:01:42 +0000 (0:00:02.060) 0:01:43.683 ********** 2026-03-08 01:04:40.756539 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756546 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756552 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756559 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756566 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.756572 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756579 | orchestrator | 2026-03-08 01:04:40.756586 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-08 01:04:40.756592 | orchestrator | Sunday 08 March 2026 01:01:44 +0000 (0:00:02.619) 0:01:46.302 ********** 2026-03-08 01:04:40.756599 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756610 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
01:04:40.756625 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756640 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:04:40.756651 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:04:40.756662 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:04:40.756673 | orchestrator | 2026-03-08 01:04:40.756682 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-08 01:04:40.756700 | orchestrator | Sunday 08 March 2026 01:01:48 +0000 (0:00:04.063) 0:01:50.365 ********** 2026-03-08 01:04:40.756712 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756722 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756753 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756765 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756776 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756791 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.756798 | orchestrator | 2026-03-08 01:04:40.756805 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-08 01:04:40.756812 | orchestrator | Sunday 08 March 2026 01:01:50 +0000 (0:00:01.902) 0:01:52.268 ********** 2026-03-08 01:04:40.756819 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756825 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756832 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756839 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.756846 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756852 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756859 | orchestrator | 2026-03-08 01:04:40.756866 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-08 01:04:40.756873 | orchestrator | Sunday 08 March 2026 01:01:53 +0000 (0:00:02.243) 0:01:54.511 ********** 2026-03-08 
01:04:40.756879 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756886 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756893 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756899 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756906 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756912 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.756919 | orchestrator | 2026-03-08 01:04:40.756926 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-08 01:04:40.756933 | orchestrator | Sunday 08 March 2026 01:01:56 +0000 (0:00:03.033) 0:01:57.545 ********** 2026-03-08 01:04:40.756939 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.756946 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.756952 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.756959 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.756966 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.756972 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.756979 | orchestrator | 2026-03-08 01:04:40.756987 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-08 01:04:40.756995 | orchestrator | Sunday 08 March 2026 01:01:58 +0000 (0:00:02.408) 0:01:59.953 ********** 2026-03-08 01:04:40.757003 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.757011 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.757019 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.757027 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.757035 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.757043 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.757051 | orchestrator | 2026-03-08 01:04:40.757059 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-03-08 01:04:40.757067 | orchestrator | Sunday 08 March 2026 01:02:00 +0000 (0:00:01.902) 0:02:01.856 ********** 2026-03-08 01:04:40.757075 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.757083 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.757091 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.757099 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.757106 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.757114 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.757122 | orchestrator | 2026-03-08 01:04:40.757130 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-08 01:04:40.757144 | orchestrator | Sunday 08 March 2026 01:02:02 +0000 (0:00:02.114) 0:02:03.970 ********** 2026-03-08 01:04:40.757158 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.757166 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.757173 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.757181 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.757189 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.757196 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.757204 | orchestrator | 2026-03-08 01:04:40.757212 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-08 01:04:40.757220 | orchestrator | Sunday 08 March 2026 01:02:04 +0000 (0:00:02.515) 0:02:06.486 ********** 2026-03-08 01:04:40.757227 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-08 01:04:40.757235 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.757243 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-08 01:04:40.757251 | orchestrator | skipping: [testbed-node-1] 
2026-03-08 01:04:40.757259 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-08 01:04:40.757267 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.757275 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-08 01:04:40.757283 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.757291 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-08 01:04:40.757299 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.757307 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-08 01:04:40.757314 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.757322 | orchestrator | 2026-03-08 01:04:40.757330 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-08 01:04:40.757338 | orchestrator | Sunday 08 March 2026 01:02:07 +0000 (0:00:02.639) 0:02:09.125 ********** 2026-03-08 01:04:40.757353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.757361 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.757368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.757376 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.757390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.757398 | 
orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.757405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:04:40.757412 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.757419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.757426 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.757436 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:04:40.757445 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.757457 | orchestrator | 2026-03-08 01:04:40.757473 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-08 01:04:40.757487 | orchestrator | Sunday 08 March 2026 01:02:09 +0000 (0:00:01.820) 0:02:10.946 ********** 2026-03-08 01:04:40.757499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-03-08 01:04:40.757528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.757540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:04:40.757556 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.757569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.757588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:04:40.757601 | orchestrator | 2026-03-08 01:04:40.757613 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-08 01:04:40.757625 | orchestrator | Sunday 08 March 2026 01:02:13 +0000 (0:00:03.838) 0:02:14.785 ********** 2026-03-08 01:04:40.757637 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.757649 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.757656 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.757662 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:04:40.757669 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:04:40.757681 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:04:40.757688 | orchestrator | 2026-03-08 01:04:40.757694 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-08 01:04:40.757701 | orchestrator | Sunday 08 March 2026 01:02:13 +0000 (0:00:00.670) 0:02:15.456 ********** 2026-03-08 01:04:40.757708 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.757714 | orchestrator | 2026-03-08 01:04:40.757721 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-08 01:04:40.757751 | orchestrator | Sunday 08 March 2026 01:02:16 +0000 (0:00:02.217) 0:02:17.673 ********** 2026-03-08 01:04:40.757760 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.757767 | orchestrator | 2026-03-08 01:04:40.757774 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-08 01:04:40.757780 | orchestrator | Sunday 08 March 2026 01:02:18 +0000 (0:00:02.312) 0:02:19.985 
********** 2026-03-08 01:04:40.757787 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.757794 | orchestrator | 2026-03-08 01:04:40.757800 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:04:40.757807 | orchestrator | Sunday 08 March 2026 01:03:02 +0000 (0:00:44.498) 0:03:04.485 ********** 2026-03-08 01:04:40.757814 | orchestrator | 2026-03-08 01:04:40.757820 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:04:40.757827 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.067) 0:03:04.552 ********** 2026-03-08 01:04:40.757833 | orchestrator | 2026-03-08 01:04:40.757840 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:04:40.757847 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.284) 0:03:04.837 ********** 2026-03-08 01:04:40.757853 | orchestrator | 2026-03-08 01:04:40.757860 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:04:40.757867 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.068) 0:03:04.905 ********** 2026-03-08 01:04:40.757873 | orchestrator | 2026-03-08 01:04:40.757880 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:04:40.757887 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.066) 0:03:04.972 ********** 2026-03-08 01:04:40.757893 | orchestrator | 2026-03-08 01:04:40.757900 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:04:40.757907 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.066) 0:03:05.039 ********** 2026-03-08 01:04:40.757913 | orchestrator | 2026-03-08 01:04:40.757920 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-08 01:04:40.757933 | 
orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.082) 0:03:05.121 ********** 2026-03-08 01:04:40.757940 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.757947 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:04:40.757953 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:04:40.757960 | orchestrator | 2026-03-08 01:04:40.757967 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-08 01:04:40.757973 | orchestrator | Sunday 08 March 2026 01:03:33 +0000 (0:00:29.723) 0:03:34.844 ********** 2026-03-08 01:04:40.757984 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:04:40.757990 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:04:40.757997 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:04:40.758004 | orchestrator | 2026-03-08 01:04:40.758011 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:04:40.758054 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:04:40.758063 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-08 01:04:40.758070 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-08 01:04:40.758077 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:04:40.758084 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:04:40.758090 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:04:40.758097 | orchestrator | 2026-03-08 01:04:40.758104 | orchestrator | 2026-03-08 01:04:40.758111 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 
01:04:40.758117 | orchestrator | Sunday 08 March 2026 01:04:37 +0000 (0:01:03.852) 0:04:38.697 ********** 2026-03-08 01:04:40.758124 | orchestrator | =============================================================================== 2026-03-08 01:04:40.758131 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.85s 2026-03-08 01:04:40.758137 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.50s 2026-03-08 01:04:40.758144 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.72s 2026-03-08 01:04:40.758151 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.57s 2026-03-08 01:04:40.758157 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.35s 2026-03-08 01:04:40.758164 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.32s 2026-03-08 01:04:40.758170 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.08s 2026-03-08 01:04:40.758177 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.06s 2026-03-08 01:04:40.758194 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.00s 2026-03-08 01:04:40.758204 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.84s 2026-03-08 01:04:40.758211 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.80s 2026-03-08 01:04:40.758218 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.77s 2026-03-08 01:04:40.758224 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.61s 2026-03-08 01:04:40.758231 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.52s 2026-03-08 01:04:40.758238 
| orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.44s 2026-03-08 01:04:40.758245 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.31s 2026-03-08 01:04:40.758258 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.27s 2026-03-08 01:04:40.758264 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.16s 2026-03-08 01:04:40.758271 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.11s 2026-03-08 01:04:40.758278 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.05s 2026-03-08 01:04:40.758284 | orchestrator | 2026-03-08 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:43.792640 | orchestrator | 2026-03-08 01:04:43 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:43.793964 | orchestrator | 2026-03-08 01:04:43 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:43.795098 | orchestrator | 2026-03-08 01:04:43 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:43.797346 | orchestrator | 2026-03-08 01:04:43 | INFO  | Task 86811de9-08d6-4141-b19a-b3700894139e is in state STARTED 2026-03-08 01:04:43.797400 | orchestrator | 2026-03-08 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:46.836142 | orchestrator | 2026-03-08 01:04:46 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:46.837280 | orchestrator | 2026-03-08 01:04:46 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:46.838355 | orchestrator | 2026-03-08 01:04:46 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:46.840302 | orchestrator | 2026-03-08 01:04:46 | INFO  | Task 
86811de9-08d6-4141-b19a-b3700894139e is in state SUCCESS 2026-03-08 01:04:46.840332 | orchestrator | 2026-03-08 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:46.841813 | orchestrator | 2026-03-08 01:04:46.841841 | orchestrator | 2026-03-08 01:04:46.841850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:04:46.841858 | orchestrator | 2026-03-08 01:04:46.841866 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:04:46.841874 | orchestrator | Sunday 08 March 2026 01:02:48 +0000 (0:00:00.267) 0:00:00.267 ********** 2026-03-08 01:04:46.841882 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:04:46.841890 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:04:46.841898 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:04:46.841906 | orchestrator | 2026-03-08 01:04:46.841914 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:04:46.841922 | orchestrator | Sunday 08 March 2026 01:02:48 +0000 (0:00:00.346) 0:00:00.614 ********** 2026-03-08 01:04:46.841930 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-08 01:04:46.841938 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-08 01:04:46.841946 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-08 01:04:46.841954 | orchestrator | 2026-03-08 01:04:46.841962 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-08 01:04:46.841970 | orchestrator | 2026-03-08 01:04:46.841978 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-08 01:04:46.841986 | orchestrator | Sunday 08 March 2026 01:02:48 +0000 (0:00:00.447) 0:00:01.061 ********** 2026-03-08 01:04:46.841994 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-08 01:04:46.842002 | orchestrator | 2026-03-08 01:04:46.842010 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-08 01:04:46.842075 | orchestrator | Sunday 08 March 2026 01:02:49 +0000 (0:00:00.598) 0:00:01.660 ********** 2026-03-08 01:04:46.842091 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-08 01:04:46.842127 | orchestrator | 2026-03-08 01:04:46.842143 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-08 01:04:46.842158 | orchestrator | Sunday 08 March 2026 01:02:52 +0000 (0:00:03.483) 0:00:05.144 ********** 2026-03-08 01:04:46.842173 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-08 01:04:46.842188 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-08 01:04:46.842203 | orchestrator | 2026-03-08 01:04:46.842216 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-08 01:04:46.842225 | orchestrator | Sunday 08 March 2026 01:02:59 +0000 (0:00:06.111) 0:00:11.255 ********** 2026-03-08 01:04:46.842233 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 01:04:46.842240 | orchestrator | 2026-03-08 01:04:46.842248 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-08 01:04:46.842256 | orchestrator | Sunday 08 March 2026 01:03:02 +0000 (0:00:03.104) 0:00:14.360 ********** 2026-03-08 01:04:46.842264 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:04:46.842272 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-08 01:04:46.842280 | orchestrator | 2026-03-08 01:04:46.842288 | orchestrator | TASK [service-ks-register : magnum | Creating roles] 
*************************** 2026-03-08 01:04:46.842296 | orchestrator | Sunday 08 March 2026 01:03:06 +0000 (0:00:03.994) 0:00:18.354 ********** 2026-03-08 01:04:46.842303 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:04:46.842311 | orchestrator | 2026-03-08 01:04:46.842319 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-08 01:04:46.842328 | orchestrator | Sunday 08 March 2026 01:03:09 +0000 (0:00:03.441) 0:00:21.795 ********** 2026-03-08 01:04:46.842336 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-08 01:04:46.842344 | orchestrator | 2026-03-08 01:04:46.842352 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-08 01:04:46.842360 | orchestrator | Sunday 08 March 2026 01:03:13 +0000 (0:00:03.547) 0:00:25.343 ********** 2026-03-08 01:04:46.842368 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.842375 | orchestrator | 2026-03-08 01:04:46.842383 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-08 01:04:46.842391 | orchestrator | Sunday 08 March 2026 01:03:16 +0000 (0:00:03.186) 0:00:28.530 ********** 2026-03-08 01:04:46.842399 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.842407 | orchestrator | 2026-03-08 01:04:46.842415 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-08 01:04:46.842423 | orchestrator | Sunday 08 March 2026 01:03:20 +0000 (0:00:03.912) 0:00:32.443 ********** 2026-03-08 01:04:46.842431 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.842441 | orchestrator | 2026-03-08 01:04:46.842451 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-08 01:04:46.842460 | orchestrator | Sunday 08 March 2026 01:03:23 +0000 (0:00:03.520) 0:00:35.963 ********** 2026-03-08 01:04:46.842484 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.842504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.842514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.842550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.842561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.842580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.842595 | orchestrator | 2026-03-08 01:04:46.842604 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-08 01:04:46.842614 | orchestrator | Sunday 08 March 2026 01:03:25 +0000 (0:00:01.599) 0:00:37.562 ********** 2026-03-08 01:04:46.842623 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:46.842632 | orchestrator | 2026-03-08 01:04:46.842641 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-08 01:04:46.842650 | orchestrator | Sunday 08 March 2026 01:03:25 +0000 (0:00:00.122) 0:00:37.685 ********** 2026-03-08 01:04:46.842659 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:46.842668 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 01:04:46.842677 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:46.842686 | orchestrator | 2026-03-08 01:04:46.842696 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-08 01:04:46.842705 | orchestrator | Sunday 08 March 2026 01:03:26 +0000 (0:00:00.555) 0:00:38.241 ********** 2026-03-08 01:04:46.842715 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 01:04:46.842809 | orchestrator | 2026-03-08 01:04:46.842818 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-08 01:04:46.842829 | orchestrator | Sunday 08 March 2026 01:03:26 +0000 (0:00:00.953) 0:00:39.194 ********** 2026-03-08 01:04:46.842837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.842846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.842855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.842879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.842888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.842897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-08 01:04:46.842905 | orchestrator | 2026-03-08 01:04:46.842913 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-08 01:04:46.842921 | orchestrator | Sunday 08 March 2026 01:03:29 +0000 (0:00:02.448) 0:00:41.642 ********** 2026-03-08 01:04:46.842929 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:04:46.842937 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:04:46.842944 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:04:46.842952 | orchestrator | 2026-03-08 01:04:46.842960 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-08 01:04:46.842968 | orchestrator | Sunday 08 March 2026 01:03:29 +0000 (0:00:00.302) 0:00:41.945 ********** 2026-03-08 01:04:46.842976 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:04:46.842984 | orchestrator | 2026-03-08 01:04:46.842992 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-08 01:04:46.843000 | orchestrator | Sunday 08 March 2026 01:03:30 +0000 (0:00:01.015) 0:00:42.961 ********** 2026-03-08 01:04:46.843008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843080 | orchestrator | 2026-03-08 01:04:46.843093 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-08 01:04:46.843106 | orchestrator | Sunday 08 March 2026 01:03:33 +0000 (0:00:02.561) 0:00:45.522 ********** 2026-03-08 01:04:46.843142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843171 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:46.843185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843218 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:46.843245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843283 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:46.843296 | orchestrator | 2026-03-08 01:04:46.843311 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-08 01:04:46.843326 | orchestrator | Sunday 08 March 2026 01:03:35 +0000 (0:00:01.684) 0:00:47.207 ********** 2026-03-08 01:04:46.843337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843368 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:46.843393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843441 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:46.843456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843488 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:46.843503 | orchestrator | 2026-03-08 
01:04:46.843518 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-08 01:04:46.843533 | orchestrator | Sunday 08 March 2026 01:03:37 +0000 (0:00:02.485) 0:00:49.692 ********** 2026-03-08 01:04:46.843543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843622 | orchestrator | 2026-03-08 01:04:46.843633 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-08 01:04:46.843647 | orchestrator | Sunday 08 March 2026 01:03:40 +0000 (0:00:02.525) 0:00:52.217 ********** 2026-03-08 01:04:46.843666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.843741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.843803 | orchestrator | 2026-03-08 01:04:46.843818 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-08 01:04:46.843839 | orchestrator | Sunday 08 March 2026 01:03:50 +0000 (0:00:10.099) 0:01:02.316 ********** 2026-03-08 01:04:46.843849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843873 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:46.843887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843917 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:46.843945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:46.843962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:46.843978 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:46.843993 | orchestrator | 2026-03-08 01:04:46.844009 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-08 01:04:46.844024 | orchestrator | Sunday 08 March 2026 01:03:51 +0000 (0:00:01.639) 0:01:03.956 ********** 2026-03-08 01:04:46.844034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 
01:04:46.844050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.844063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:46.844083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.844100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.844123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:46.844139 | orchestrator | 2026-03-08 01:04:46.844149 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-08 01:04:46.844157 | orchestrator | Sunday 08 March 2026 01:03:55 +0000 (0:00:03.477) 0:01:07.433 ********** 2026-03-08 01:04:46.844166 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:46.844175 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:46.844183 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:46.844192 | orchestrator | 2026-03-08 01:04:46.844201 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-08 01:04:46.844210 | orchestrator | Sunday 08 March 2026 01:03:55 +0000 (0:00:00.649) 0:01:08.082 ********** 2026-03-08 01:04:46.844225 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.844240 | orchestrator | 2026-03-08 01:04:46.844255 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-08 01:04:46.844271 | orchestrator | Sunday 08 March 2026 01:03:57 +0000 (0:00:01.937) 0:01:10.020 ********** 2026-03-08 01:04:46.844286 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.844310 | orchestrator | 2026-03-08 01:04:46.844353 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-08 01:04:46.844369 | orchestrator | Sunday 08 March 2026 01:03:59 +0000 (0:00:01.986) 0:01:12.007 ********** 2026-03-08 01:04:46.844382 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.844396 | orchestrator | 2026-03-08 01:04:46.844409 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-08 
01:04:46.844423 | orchestrator | Sunday 08 March 2026 01:04:15 +0000 (0:00:16.013) 0:01:28.020 ********** 2026-03-08 01:04:46.844438 | orchestrator | 2026-03-08 01:04:46.844453 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-08 01:04:46.844467 | orchestrator | Sunday 08 March 2026 01:04:15 +0000 (0:00:00.072) 0:01:28.092 ********** 2026-03-08 01:04:46.844481 | orchestrator | 2026-03-08 01:04:46.844496 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-08 01:04:46.844511 | orchestrator | Sunday 08 March 2026 01:04:15 +0000 (0:00:00.071) 0:01:28.164 ********** 2026-03-08 01:04:46.844525 | orchestrator | 2026-03-08 01:04:46.844541 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-08 01:04:46.844553 | orchestrator | Sunday 08 March 2026 01:04:16 +0000 (0:00:00.088) 0:01:28.252 ********** 2026-03-08 01:04:46.844561 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.844570 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:04:46.844578 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:04:46.844587 | orchestrator | 2026-03-08 01:04:46.844596 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-08 01:04:46.844610 | orchestrator | Sunday 08 March 2026 01:04:29 +0000 (0:00:13.650) 0:01:41.903 ********** 2026-03-08 01:04:46.844619 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:46.844628 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:04:46.844636 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:04:46.844656 | orchestrator | 2026-03-08 01:04:46.844673 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:04:46.844683 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:04:46.844693 
| orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:04:46.844702 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:04:46.844710 | orchestrator | 2026-03-08 01:04:46.844743 | orchestrator | 2026-03-08 01:04:46.844759 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:04:46.844773 | orchestrator | Sunday 08 March 2026 01:04:45 +0000 (0:00:16.149) 0:01:58.053 ********** 2026-03-08 01:04:46.844782 | orchestrator | =============================================================================== 2026-03-08 01:04:46.844791 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.15s 2026-03-08 01:04:46.844799 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.01s 2026-03-08 01:04:46.844808 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.65s 2026-03-08 01:04:46.844816 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 10.10s 2026-03-08 01:04:46.844825 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.11s 2026-03-08 01:04:46.844833 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.99s 2026-03-08 01:04:46.844842 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.91s 2026-03-08 01:04:46.844850 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.55s 2026-03-08 01:04:46.844859 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.52s 2026-03-08 01:04:46.844867 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.48s 2026-03-08 01:04:46.844876 | orchestrator | magnum : Check magnum 
containers ---------------------------------------- 3.48s 2026-03-08 01:04:46.844884 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.44s 2026-03-08 01:04:46.844893 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.19s 2026-03-08 01:04:46.844901 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.10s 2026-03-08 01:04:46.844910 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.56s 2026-03-08 01:04:46.844919 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.53s 2026-03-08 01:04:46.844927 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.48s 2026-03-08 01:04:46.844936 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.45s 2026-03-08 01:04:46.844944 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.99s 2026-03-08 01:04:46.844953 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.94s 2026-03-08 01:04:49.878792 | orchestrator | 2026-03-08 01:04:49 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:49.882315 | orchestrator | 2026-03-08 01:04:49 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:04:49.883109 | orchestrator | 2026-03-08 01:04:49 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:49.886162 | orchestrator | 2026-03-08 01:04:49 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:49.886204 | orchestrator | 2026-03-08 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:52.941768 | orchestrator | 2026-03-08 01:04:52 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:52.941849 | 
orchestrator | 2026-03-08 01:04:52 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:04:52.942783 | orchestrator | 2026-03-08 01:04:52 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:52.944234 | orchestrator | 2026-03-08 01:04:52 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:52.944268 | orchestrator | 2026-03-08 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:55.995495 | orchestrator | 2026-03-08 01:04:55 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:55.995556 | orchestrator | 2026-03-08 01:04:56 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:04:55.999406 | orchestrator | 2026-03-08 01:04:56 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:56.002037 | orchestrator | 2026-03-08 01:04:56 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:56.002078 | orchestrator | 2026-03-08 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:59.053433 | orchestrator | 2026-03-08 01:04:59 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:04:59.054913 | orchestrator | 2026-03-08 01:04:59 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:04:59.056611 | orchestrator | 2026-03-08 01:04:59 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:04:59.058571 | orchestrator | 2026-03-08 01:04:59 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:04:59.058634 | orchestrator | 2026-03-08 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:02.107248 | orchestrator | 2026-03-08 01:05:02 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:05:02.111347 | orchestrator | 2026-03-08 
01:05:02 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:02.113934 | orchestrator | 2026-03-08 01:05:02 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:02.115998 | orchestrator | 2026-03-08 01:05:02 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:02.116044 | orchestrator | 2026-03-08 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:05.156387 | orchestrator | 2026-03-08 01:05:05 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:05:05.160913 | orchestrator | 2026-03-08 01:05:05 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:05.160964 | orchestrator | 2026-03-08 01:05:05 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:05.162183 | orchestrator | 2026-03-08 01:05:05 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:05.162217 | orchestrator | 2026-03-08 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:08.204663 | orchestrator | 2026-03-08 01:05:08 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:05:08.206064 | orchestrator | 2026-03-08 01:05:08 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:08.208075 | orchestrator | 2026-03-08 01:05:08 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:08.208625 | orchestrator | 2026-03-08 01:05:08 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:08.208742 | orchestrator | 2026-03-08 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:11.524934 | orchestrator | 2026-03-08 01:05:11 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:05:11.524991 | orchestrator | 2026-03-08 01:05:11 | INFO  | Task 
f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:11.524997 | orchestrator | 2026-03-08 01:05:11 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:11.525001 | orchestrator | 2026-03-08 01:05:11 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:11.525006 | orchestrator | 2026-03-08 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:14.411197 | orchestrator | 2026-03-08 01:05:14 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state STARTED 2026-03-08 01:05:14.412356 | orchestrator | 2026-03-08 01:05:14 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:14.413845 | orchestrator | 2026-03-08 01:05:14 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:14.415057 | orchestrator | 2026-03-08 01:05:14 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:14.415612 | orchestrator | 2026-03-08 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:17.461221 | orchestrator | 2026-03-08 01:05:17 | INFO  | Task ff905af9-8d54-4aa6-91b9-b9d4820477cf is in state SUCCESS 2026-03-08 01:05:17.461972 | orchestrator | 2026-03-08 01:05:17 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:17.463083 | orchestrator | 2026-03-08 01:05:17 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:17.464398 | orchestrator | 2026-03-08 01:05:17 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:17.464426 | orchestrator | 2026-03-08 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:20.495229 | orchestrator | 2026-03-08 01:05:20 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:20.499140 | orchestrator | 2026-03-08 01:05:20 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:20.501238 | orchestrator | 2026-03-08 01:05:20 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:20.502608 | orchestrator | 2026-03-08 01:05:20 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:20.502639 | orchestrator | 2026-03-08 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:23.534799 | orchestrator | 2026-03-08 01:05:23 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:23.535429 | orchestrator | 2026-03-08 01:05:23 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:23.537483 | orchestrator | 2026-03-08 01:05:23 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:23.538825 | orchestrator | 2026-03-08 01:05:23 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:23.538925 | orchestrator | 2026-03-08 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:26.561677 | orchestrator | 2026-03-08 01:05:26 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:26.562889 | orchestrator | 2026-03-08 01:05:26 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:26.564984 | orchestrator | 2026-03-08 01:05:26 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:26.566441 | orchestrator | 2026-03-08 01:05:26 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:26.566486 | orchestrator | 2026-03-08 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:29.609519 | orchestrator | 2026-03-08 01:05:29 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:29.614547 | orchestrator | 2026-03-08 01:05:29 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:29.616408 | orchestrator | 2026-03-08 01:05:29 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:29.619841 | orchestrator | 2026-03-08 01:05:29 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:29.619913 | orchestrator | 2026-03-08 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:32.663041 | orchestrator | 2026-03-08 01:05:32 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:32.663919 | orchestrator | 2026-03-08 01:05:32 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:32.665022 | orchestrator | 2026-03-08 01:05:32 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:32.665928 | orchestrator | 2026-03-08 01:05:32 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:32.666168 | orchestrator | 2026-03-08 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:35.700101 | orchestrator | 2026-03-08 01:05:35 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:35.700789 | orchestrator | 2026-03-08 01:05:35 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:35.702151 | orchestrator | 2026-03-08 01:05:35 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:35.703828 | orchestrator | 2026-03-08 01:05:35 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:35.704734 | orchestrator | 2026-03-08 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:38.739037 | orchestrator | 2026-03-08 01:05:38 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:38.739558 | orchestrator | 2026-03-08 01:05:38 | INFO  | Task 
e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:38.741345 | orchestrator | 2026-03-08 01:05:38 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:38.742188 | orchestrator | 2026-03-08 01:05:38 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:38.742223 | orchestrator | 2026-03-08 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:41.783996 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:41.784236 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:41.785162 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state STARTED 2026-03-08 01:05:41.785952 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:41.785981 | orchestrator | 2026-03-08 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:44.829910 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:44.830206 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:44.830744 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task d9809e32-2d9b-4b18-b964-29ee706a6140 is in state SUCCESS 2026-03-08 01:05:44.831120 | orchestrator | 2026-03-08 01:05:44.831152 | orchestrator | 2026-03-08 01:05:44.831160 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:05:44.831167 | orchestrator | 2026-03-08 01:05:44.831174 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:05:44.831242 | orchestrator | Sunday 08 March 2026 01:04:42 +0000 (0:00:00.494) 0:00:00.494 
********** 2026-03-08 01:05:44.831250 | orchestrator | ok: [testbed-manager] 2026-03-08 01:05:44.831259 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:05:44.831263 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:05:44.831267 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:05:44.831271 | orchestrator | ok: [testbed-node-3] 2026-03-08 01:05:44.831275 | orchestrator | ok: [testbed-node-4] 2026-03-08 01:05:44.831279 | orchestrator | ok: [testbed-node-5] 2026-03-08 01:05:44.831283 | orchestrator | 2026-03-08 01:05:44.831287 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:05:44.831291 | orchestrator | Sunday 08 March 2026 01:04:43 +0000 (0:00:00.857) 0:00:01.352 ********** 2026-03-08 01:05:44.831296 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831301 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831307 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831313 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831319 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831325 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831331 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-08 01:05:44.831337 | orchestrator | 2026-03-08 01:05:44.831343 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-08 01:05:44.831349 | orchestrator | 2026-03-08 01:05:44.831356 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-08 01:05:44.831362 | orchestrator | Sunday 08 March 2026 01:04:44 +0000 (0:00:00.932) 0:00:02.284 ********** 2026-03-08 01:05:44.831369 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:05:44.831376 | orchestrator | 2026-03-08 01:05:44.831382 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-08 01:05:44.831385 | orchestrator | Sunday 08 March 2026 01:04:45 +0000 (0:00:01.355) 0:00:03.639 ********** 2026-03-08 01:05:44.831389 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-08 01:05:44.831393 | orchestrator | 2026-03-08 01:05:44.831397 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-08 01:05:44.831400 | orchestrator | Sunday 08 March 2026 01:04:49 +0000 (0:00:03.335) 0:00:06.975 ********** 2026-03-08 01:05:44.831405 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-08 01:05:44.831410 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-08 01:05:44.831413 | orchestrator | 2026-03-08 01:05:44.831417 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-08 01:05:44.831421 | orchestrator | Sunday 08 March 2026 01:04:55 +0000 (0:00:06.731) 0:00:13.706 ********** 2026-03-08 01:05:44.831425 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-08 01:05:44.831428 | orchestrator | 2026-03-08 01:05:44.831432 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-08 01:05:44.831436 | orchestrator | Sunday 08 March 2026 01:04:59 +0000 (0:00:03.189) 0:00:16.895 ********** 2026-03-08 01:05:44.831452 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:05:44.831457 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-08 01:05:44.831461 | 
orchestrator | 2026-03-08 01:05:44.831464 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-08 01:05:44.831468 | orchestrator | Sunday 08 March 2026 01:05:03 +0000 (0:00:04.231) 0:00:21.127 ********** 2026-03-08 01:05:44.831475 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-08 01:05:44.831481 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-08 01:05:44.831490 | orchestrator | 2026-03-08 01:05:44.831497 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-08 01:05:44.831502 | orchestrator | Sunday 08 March 2026 01:05:10 +0000 (0:00:07.351) 0:00:28.479 ********** 2026-03-08 01:05:44.831530 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-08 01:05:44.831537 | orchestrator | 2026-03-08 01:05:44.831543 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:05:44.831549 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831556 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831562 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831569 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831574 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831608 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831614 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831617 | orchestrator | 2026-03-08 01:05:44.831621 | orchestrator | 
2026-03-08 01:05:44.831625 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:05:44.831629 | orchestrator | Sunday 08 March 2026 01:05:16 +0000 (0:00:05.883) 0:00:34.362 ********** 2026-03-08 01:05:44.831633 | orchestrator | =============================================================================== 2026-03-08 01:05:44.831637 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.35s 2026-03-08 01:05:44.831640 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.73s 2026-03-08 01:05:44.831644 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.88s 2026-03-08 01:05:44.831648 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.23s 2026-03-08 01:05:44.831652 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.34s 2026-03-08 01:05:44.831655 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.19s 2026-03-08 01:05:44.831659 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.36s 2026-03-08 01:05:44.831663 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2026-03-08 01:05:44.831667 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2026-03-08 01:05:44.831670 | orchestrator | 2026-03-08 01:05:44.831674 | orchestrator | 2026-03-08 01:05:44.831678 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-08 01:05:44.831681 | orchestrator | 2026-03-08 01:05:44.831685 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-08 01:05:44.831689 | orchestrator | Sunday 08 March 2026 00:59:57 +0000 (0:00:00.066) 0:00:00.066 ********** 2026-03-08 
01:05:44.831699 | orchestrator | changed: [localhost] 2026-03-08 01:05:44.831703 | orchestrator | 2026-03-08 01:05:44.831707 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-08 01:05:44.831711 | orchestrator | Sunday 08 March 2026 00:59:58 +0000 (0:00:00.860) 0:00:00.926 ********** 2026-03-08 01:05:44.831715 | orchestrator | 2026-03-08 01:05:44.831718 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831722 | orchestrator | 2026-03-08 01:05:44.831726 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831730 | orchestrator | 2026-03-08 01:05:44.831733 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831737 | orchestrator | 2026-03-08 01:05:44.831741 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831745 | orchestrator | 2026-03-08 01:05:44.831748 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831752 | orchestrator | 2026-03-08 01:05:44.831756 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831760 | orchestrator | 2026-03-08 01:05:44.831764 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-08 01:05:44.831767 | orchestrator | changed: [localhost] 2026-03-08 01:05:44.831771 | orchestrator | 2026-03-08 01:05:44.831775 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-08 01:05:44.831779 | orchestrator | Sunday 08 March 2026 01:05:28 +0000 (0:05:29.540) 0:05:30.466 ********** 2026-03-08 01:05:44.831782 | orchestrator | changed: [localhost] 2026-03-08 01:05:44.831786 | orchestrator | 2026-03-08 
01:05:44.831790 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:05:44.831808 | orchestrator | 2026-03-08 01:05:44.831812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:05:44.831816 | orchestrator | Sunday 08 March 2026 01:05:41 +0000 (0:00:12.976) 0:05:43.442 ********** 2026-03-08 01:05:44.831819 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:05:44.831823 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:05:44.831827 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:05:44.831830 | orchestrator | 2026-03-08 01:05:44.831834 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:05:44.831838 | orchestrator | Sunday 08 March 2026 01:05:41 +0000 (0:00:00.326) 0:05:43.769 ********** 2026-03-08 01:05:44.831842 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-08 01:05:44.831845 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-08 01:05:44.831849 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-08 01:05:44.831918 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-08 01:05:44.831924 | orchestrator | 2026-03-08 01:05:44.831929 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-08 01:05:44.831933 | orchestrator | skipping: no hosts matched 2026-03-08 01:05:44.831938 | orchestrator | 2026-03-08 01:05:44.831975 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:05:44.831981 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831986 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831991 | orchestrator | 
testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.831995 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:05:44.832000 | orchestrator | 2026-03-08 01:05:44.832004 | orchestrator | 2026-03-08 01:05:44.832009 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:05:44.832024 | orchestrator | Sunday 08 March 2026 01:05:42 +0000 (0:00:00.657) 0:05:44.427 ********** 2026-03-08 01:05:44.832033 | orchestrator | =============================================================================== 2026-03-08 01:05:44.832037 | orchestrator | Download ironic-agent initramfs --------------------------------------- 329.54s 2026-03-08 01:05:44.832041 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.98s 2026-03-08 01:05:44.832045 | orchestrator | Ensure the destination directory exists --------------------------------- 0.86s 2026-03-08 01:05:44.832048 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2026-03-08 01:05:44.832052 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-08 01:05:44.832056 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:44.832060 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:05:44.832064 | orchestrator | 2026-03-08 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:47.863391 | orchestrator | 2026-03-08 01:05:47 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:05:47.863480 | orchestrator | 2026-03-08 01:05:47 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state STARTED 2026-03-08 01:05:47.864089 | orchestrator | 
2026-03-08 01:05:47 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:05:47.864811 | orchestrator | 2026-03-08 01:05:47 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:05:47.864862 | orchestrator | 2026-03-08 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:33.641105 | orchestrator | 2026-03-08 01:06:33 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:06:33.643934 | orchestrator | 2026-03-08 01:06:33 | INFO  | Task e6c424f5-6e0e-460b-af64-4ee0dc1b9ad7 is in state SUCCESS 2026-03-08 01:06:33.646796 | orchestrator | 2026-03-08 01:06:33.646863 | orchestrator | 2026-03-08 
01:06:33.646869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:06:33.646875 | orchestrator | 2026-03-08 01:06:33.646879 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:06:33.646883 | orchestrator | Sunday 08 March 2026 01:03:14 +0000 (0:00:00.287) 0:00:00.287 ********** 2026-03-08 01:06:33.646887 | orchestrator | ok: [testbed-manager] 2026-03-08 01:06:33.646893 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:06:33.646897 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:06:33.646901 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:06:33.646905 | orchestrator | ok: [testbed-node-3] 2026-03-08 01:06:33.646908 | orchestrator | ok: [testbed-node-4] 2026-03-08 01:06:33.646912 | orchestrator | ok: [testbed-node-5] 2026-03-08 01:06:33.646916 | orchestrator | 2026-03-08 01:06:33.646920 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:06:33.646924 | orchestrator | Sunday 08 March 2026 01:03:15 +0000 (0:00:00.889) 0:00:01.177 ********** 2026-03-08 01:06:33.646928 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646932 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646936 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646940 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646944 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646947 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646951 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-08 01:06:33.646955 | orchestrator | 2026-03-08 01:06:33.646959 | orchestrator | PLAY [Apply role prometheus] *************************************************** 
2026-03-08 01:06:33.646962 | orchestrator | 2026-03-08 01:06:33.646966 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-08 01:06:33.646970 | orchestrator | Sunday 08 March 2026 01:03:16 +0000 (0:00:00.704) 0:00:01.881 ********** 2026-03-08 01:06:33.646975 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:06:33.646980 | orchestrator | 2026-03-08 01:06:33.646984 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-08 01:06:33.646998 | orchestrator | Sunday 08 March 2026 01:03:17 +0000 (0:00:01.628) 0:00:03.510 ********** 2026-03-08 01:06:33.647004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647015 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:33.647116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647274 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647345 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647612 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 01:06:33.647621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647626 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647650 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-08 01:06:33.647673 | orchestrator | 2026-03-08 01:06:33.647679 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-08 01:06:33.647684 | orchestrator | Sunday 08 March 2026 01:03:20 +0000 (0:00:02.956) 0:00:06.466 ********** 2026-03-08 01:06:33.647691 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:06:33.647696 | orchestrator | 2026-03-08 01:06:33.647701 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-08 01:06:33.647706 | orchestrator | Sunday 08 March 2026 01:03:22 +0000 (0:00:01.370) 0:00:07.837 ********** 2026-03-08 01:06:33.647710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647724 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:33.647732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647752 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.647760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647790 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647836 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 01:06:33.647848 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.647873 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.647896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-08 01:06:33.647906 | orchestrator | 2026-03-08 01:06:33.647914 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-08 01:06:33.647921 | orchestrator | Sunday 08 March 2026 01:03:27 +0000 (0:00:05.709) 0:00:13.547 ********** 2026-03-08 01:06:33.647928 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-08 01:06:33.647935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.647942 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.647954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-08 01:06:33.647961 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.647975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.647979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.647983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.647987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.647991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.647995 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.648000 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.648007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648019 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648045 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.648051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648062 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.648066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648081 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.648085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648104 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.648108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648123 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.648127 | orchestrator | 2026-03-08 01:06:33.648131 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-08 01:06:33.648134 | orchestrator | Sunday 08 March 2026 01:03:29 +0000 (0:00:01.683) 0:00:15.230 ********** 2026-03-08 01:06:33.648138 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-08 01:06:33.648142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648158 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-08 01:06:33.648162 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648194 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.648201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648227 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.648230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648249 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.648252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.648270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648274 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.648278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-03-08 01:06:33.648296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:33.648300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.648307 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.648311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.653634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.653698 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.653705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:33.653723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.653728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:33.653732 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.653736 | orchestrator | 2026-03-08 01:06:33.653741 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-08 01:06:33.653746 | orchestrator | Sunday 08 March 2026 01:03:31 +0000 (0:00:02.197) 0:00:17.428 ********** 2026-03-08 01:06:33.653751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:33.653767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653781 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.653804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.653815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.653823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.653836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.653840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.653853 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.653857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.653862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.653869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.653873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.653880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.653884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 01:06:33.653888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 01:06:33.653895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 01:06:33.653900 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 01:06:33.653908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 01:06:33.653914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 01:06:33.653918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:33.653922 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:33.653926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:33.653932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:33.653936 | orchestrator |
2026-03-08 01:06:33.653943 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-03-08 01:06:33.653947 | orchestrator | Sunday 08 March 2026 01:03:39 +0000 (0:00:07.256) 0:00:24.684 **********
2026-03-08 01:06:33.653951 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 01:06:33.653955 | orchestrator |
2026-03-08 01:06:33.653959 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-03-08 01:06:33.653963 | orchestrator | Sunday 08 March 2026 01:03:40 +0000 (0:00:01.709) 0:00:26.393 **********
2026-03-08 01:06:33.653967 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.653972 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.653979 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.653983 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.653987 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.653994 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654009 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088862, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0082088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654382 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654395 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654409 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654416 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654423 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654436 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654451 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654458 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654464 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654503 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654511 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654518 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654528 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654544 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654552 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088880, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0129123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654558 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654570 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654576 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654582 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654595 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654602 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654608 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654615 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654624 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654631 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088858, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0072637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654637 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654652 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654659 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654665 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654672 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654679 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654690 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654697 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654711 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654717 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654724 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654731 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654737 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088873, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0109124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654748 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654755 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654779 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654786 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654792 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654799 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654806 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654815 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654830 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:33.654840 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088855, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime':
1764530892.0, 'ctime': 1772929027.0063899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.654847 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.654854 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.654861 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655283 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655338 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655357 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655370 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655376 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655383 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655389 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655395 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655413 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655422 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655428 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655432 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655436 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655440 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655444 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088863, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.008378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655463 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 
01:06:33.655468 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655472 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655545 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655554 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655560 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655567 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655583 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655590 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655597 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655607 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655614 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655621 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655628 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088871, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0099123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 
01:06:33.655644 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655652 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655659 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655669 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655675 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655682 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655711 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655718 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655726 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.655736 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655743 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655750 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655760 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655772 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655779 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655787 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088864, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655797 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655805 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655812 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655824 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655833 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655837 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655841 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.655845 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655851 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655855 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.655859 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655867 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.655871 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655874 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655878 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.655886 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:33.655890 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.655894 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088861, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0079124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655900 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088879, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0125005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655904 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088853, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655908 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088889, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655917 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088877, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0119123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655921 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088856, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0065975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655927 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088854, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0059123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655931 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088870, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0097191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655935 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088868, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0089123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655944 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088887, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0149124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:33.655952 | orchestrator | 2026-03-08 01:06:33.655956 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-08 01:06:33.655961 | orchestrator | Sunday 08 March 2026 01:04:07 +0000 (0:00:26.467) 0:00:52.861 ********** 2026-03-08 01:06:33.655965 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 01:06:33.655969 | orchestrator | 2026-03-08 01:06:33.655973 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-08 01:06:33.655976 | orchestrator | Sunday 08 March 2026 01:04:08 +0000 (0:00:00.796) 0:00:53.658 ********** 2026-03-08 01:06:33.655980 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.655985 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.655989 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.655993 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.655997 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656001 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 01:06:33.656005 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656008 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656012 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.656016 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656020 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656024 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656031 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.656035 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656038 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656042 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656046 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656050 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.656053 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656057 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656061 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656068 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.656075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656079 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656083 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656087 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656090 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.656094 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656098 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656102 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656105 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656109 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-08 01:06:33.656113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-08 01:06:33.656117 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-08 01:06:33.656120 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 01:06:33.656124 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-08 01:06:33.656128 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-08 01:06:33.656134 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-08 01:06:33.656138 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-08 01:06:33.656142 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-08 01:06:33.656146 | orchestrator | 2026-03-08 01:06:33.656149 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-08 01:06:33.656153 | orchestrator | Sunday 08 March 2026 01:04:09 +0000 (0:00:01.678) 0:00:55.337 ********** 2026-03-08 01:06:33.656157 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-08 01:06:33.656161 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-08 01:06:33.656165 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656168 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656172 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-08 01:06:33.656178 | orchestrator | 
skipping: [testbed-node-2] 2026-03-08 01:06:33.656182 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-08 01:06:33.656186 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656189 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-08 01:06:33.656193 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656198 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-08 01:06:33.656204 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656210 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-08 01:06:33.656217 | orchestrator | 2026-03-08 01:06:33.656223 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-08 01:06:33.656230 | orchestrator | Sunday 08 March 2026 01:04:25 +0000 (0:00:16.143) 0:01:11.480 ********** 2026-03-08 01:06:33.656236 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-08 01:06:33.656243 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656249 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-08 01:06:33.656256 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656261 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-08 01:06:33.656265 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656268 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-08 01:06:33.656272 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.656276 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-08 01:06:33.656280 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656283 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-08 01:06:33.656287 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656291 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-08 01:06:33.656294 | orchestrator | 2026-03-08 01:06:33.656298 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-08 01:06:33.656302 | orchestrator | Sunday 08 March 2026 01:04:28 +0000 (0:00:02.701) 0:01:14.182 ********** 2026-03-08 01:06:33.656306 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-08 01:06:33.656310 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-08 01:06:33.656314 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-08 01:06:33.656322 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656325 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656329 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.656338 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-08 01:06:33 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:06:33.656344 | orchestrator | 2026-03-08 01:06:33 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:06:33.656348 | orchestrator | 2026-03-08 01:06:33 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488
is in state STARTED 2026-03-08 01:06:33.656351 | orchestrator | 2026-03-08 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:33.656355 | orchestrator | 2026-03-08 01:06:33.656359 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656363 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-08 01:06:33.656367 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656371 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-08 01:06:33.656375 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656378 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-08 01:06:33.656382 | orchestrator | 2026-03-08 01:06:33.656386 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-08 01:06:33.656390 | orchestrator | Sunday 08 March 2026 01:04:30 +0000 (0:00:01.988) 0:01:16.170 ********** 2026-03-08 01:06:33.656393 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 01:06:33.656397 | orchestrator | 2026-03-08 01:06:33.656401 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-08 01:06:33.656405 | orchestrator | Sunday 08 March 2026 01:04:31 +0000 (0:00:00.798) 0:01:16.968 ********** 2026-03-08 01:06:33.656408 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.656412 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656416 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656420 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.656424 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656430 | orchestrator | 
skipping: [testbed-node-4] 2026-03-08 01:06:33.656434 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656438 | orchestrator | 2026-03-08 01:06:33.656442 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-08 01:06:33.656445 | orchestrator | Sunday 08 March 2026 01:04:32 +0000 (0:00:00.839) 0:01:17.808 ********** 2026-03-08 01:06:33.656449 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.656453 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656457 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656461 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656465 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:33.656469 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:33.656472 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:33.656523 | orchestrator | 2026-03-08 01:06:33.656528 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-08 01:06:33.656532 | orchestrator | Sunday 08 March 2026 01:04:34 +0000 (0:00:02.588) 0:01:20.396 ********** 2026-03-08 01:06:33.656536 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656539 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656543 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.656552 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656556 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656559 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656563 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656567 | orchestrator | skipping: [testbed-node-2] 2026-03-08 
01:06:33.656570 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656574 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656578 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656582 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656585 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:33.656589 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656593 | orchestrator | 2026-03-08 01:06:33.656596 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-08 01:06:33.656600 | orchestrator | Sunday 08 March 2026 01:04:36 +0000 (0:00:01.824) 0:01:22.220 ********** 2026-03-08 01:06:33.656604 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:33.656608 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656611 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:33.656615 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656619 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:33.656623 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.656626 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:33.656630 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656637 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-08 01:06:33.656641 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:33.656646 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656652 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:33.656657 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656663 | orchestrator | 2026-03-08 01:06:33.656672 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-08 01:06:33.656679 | orchestrator | Sunday 08 March 2026 01:04:38 +0000 (0:00:01.738) 0:01:23.959 ********** 2026-03-08 01:06:33.656686 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:33.656691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-08 01:06:33.656697 | orchestrator | due to this access issue: 2026-03-08 01:06:33.656703 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-08 01:06:33.656709 | orchestrator | not a directory 2026-03-08 01:06:33.656715 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 01:06:33.656721 | orchestrator | 2026-03-08 01:06:33.656727 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-08 01:06:33.656733 | orchestrator | Sunday 08 March 2026 01:04:39 +0000 (0:00:01.289) 0:01:25.249 ********** 2026-03-08 01:06:33.656740 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.656746 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656752 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656758 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.656764 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656772 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656776 | orchestrator | skipping: [testbed-node-5] 2026-03-08 
01:06:33.656780 | orchestrator | 2026-03-08 01:06:33.656784 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-08 01:06:33.656787 | orchestrator | Sunday 08 March 2026 01:04:40 +0000 (0:00:01.046) 0:01:26.295 ********** 2026-03-08 01:06:33.656791 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.656795 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:33.656799 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:33.656802 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:33.656809 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:33.656813 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:33.656817 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:33.656821 | orchestrator | 2026-03-08 01:06:33.656824 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-08 01:06:33.656828 | orchestrator | Sunday 08 March 2026 01:04:41 +0000 (0:00:00.889) 0:01:27.184 ********** 2026-03-08 01:06:33.656833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656839 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:33.656844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656875 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:33.656883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656912 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656953 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 01:06:33.656957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:33.656968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656977 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:33.656989 | orchestrator | 2026-03-08 01:06:33.656995 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-08 01:06:33.656999 | orchestrator | Sunday 08 March 2026 01:04:45 +0000 (0:00:03.687) 0:01:30.872 ********** 2026-03-08 01:06:33.657003 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-08 01:06:33.657007 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:33.657011 | orchestrator | 2026-03-08 01:06:33.657015 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657018 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:01.097) 0:01:31.969 ********** 2026-03-08 01:06:33.657022 | orchestrator | 2026-03-08 01:06:33.657026 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657030 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:00.086) 0:01:32.056 ********** 2026-03-08 01:06:33.657034 | orchestrator | 2026-03-08 01:06:33.657037 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657041 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:00.079) 0:01:32.136 ********** 2026-03-08 01:06:33.657045 | orchestrator | 2026-03-08 01:06:33.657049 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657053 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:00.071) 0:01:32.207 ********** 2026-03-08 01:06:33.657056 | orchestrator | 2026-03-08 01:06:33.657060 | orchestrator | 
TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657064 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:00.265) 0:01:32.472 ********** 2026-03-08 01:06:33.657068 | orchestrator | 2026-03-08 01:06:33.657071 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657075 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:00.066) 0:01:32.539 ********** 2026-03-08 01:06:33.657079 | orchestrator | 2026-03-08 01:06:33.657084 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:33.657089 | orchestrator | Sunday 08 March 2026 01:04:46 +0000 (0:00:00.066) 0:01:32.605 ********** 2026-03-08 01:06:33.657095 | orchestrator | 2026-03-08 01:06:33.657105 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-08 01:06:33.657112 | orchestrator | Sunday 08 March 2026 01:04:47 +0000 (0:00:00.086) 0:01:32.692 ********** 2026-03-08 01:06:33.657122 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:33.657128 | orchestrator | 2026-03-08 01:06:33.657134 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-08 01:06:33.657140 | orchestrator | Sunday 08 March 2026 01:05:06 +0000 (0:00:19.666) 0:01:52.359 ********** 2026-03-08 01:06:33.657146 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:33.657152 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:06:33.657158 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:06:33.657163 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:06:33.657169 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:33.657174 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:33.657180 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:33.657185 | orchestrator | 2026-03-08 01:06:33.657191 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-08 01:06:33.657197 | orchestrator | Sunday 08 March 2026 01:05:21 +0000 (0:00:14.917) 0:02:07.276 ********** 2026-03-08 01:06:33.657203 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:33.657209 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:33.657216 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:33.657222 | orchestrator | 2026-03-08 01:06:33.657229 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-08 01:06:33.657235 | orchestrator | Sunday 08 March 2026 01:05:33 +0000 (0:00:11.343) 0:02:18.620 ********** 2026-03-08 01:06:33.657245 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:33.657252 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:33.657258 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:33.657264 | orchestrator | 2026-03-08 01:06:33.657270 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-08 01:06:33.657276 | orchestrator | Sunday 08 March 2026 01:05:43 +0000 (0:00:10.527) 0:02:29.148 ********** 2026-03-08 01:06:33.657281 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:33.657287 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:33.657293 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:06:33.657299 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:06:33.657306 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:33.657312 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:33.657318 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:06:33.657325 | orchestrator | 2026-03-08 01:06:33.657329 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-08 01:06:33.657333 | orchestrator | Sunday 08 March 2026 01:06:00 +0000 (0:00:16.616) 0:02:45.764 ********** 2026-03-08 01:06:33.657336 | 
orchestrator | changed: [testbed-manager] 2026-03-08 01:06:33.657340 | orchestrator | 2026-03-08 01:06:33.657344 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-08 01:06:33.657348 | orchestrator | Sunday 08 March 2026 01:06:07 +0000 (0:00:07.086) 0:02:52.851 ********** 2026-03-08 01:06:33.657352 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:33.657356 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:33.657359 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:33.657363 | orchestrator | 2026-03-08 01:06:33.657367 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-08 01:06:33.657371 | orchestrator | Sunday 08 March 2026 01:06:19 +0000 (0:00:11.828) 0:03:04.680 ********** 2026-03-08 01:06:33.657374 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:33.657378 | orchestrator | 2026-03-08 01:06:33.657382 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-08 01:06:33.657386 | orchestrator | Sunday 08 March 2026 01:06:24 +0000 (0:00:05.194) 0:03:09.875 ********** 2026-03-08 01:06:33.657390 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:06:33.657393 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:06:33.657397 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:06:33.657401 | orchestrator | 2026-03-08 01:06:33.657405 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:06:33.657413 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-08 01:06:33.657421 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:06:33.657425 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:06:33.657429 
| orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:06:33.657433 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:06:33.657437 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:06:33.657440 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:06:33.657444 | orchestrator | 2026-03-08 01:06:33.657448 | orchestrator | 2026-03-08 01:06:33.657452 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:06:33.657456 | orchestrator | Sunday 08 March 2026 01:06:31 +0000 (0:00:07.486) 0:03:17.361 ********** 2026-03-08 01:06:33.657460 | orchestrator | =============================================================================== 2026-03-08 01:06:33.657463 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.47s 2026-03-08 01:06:33.657467 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.67s 2026-03-08 01:06:33.657471 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.62s 2026-03-08 01:06:33.657475 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.14s 2026-03-08 01:06:33.657497 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.92s 2026-03-08 01:06:33.657501 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.83s 2026-03-08 01:06:33.657505 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.34s 2026-03-08 01:06:33.657508 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.53s 2026-03-08 01:06:33.657512 | orchestrator | 
prometheus : Restart prometheus-libvirt-exporter container -------------- 7.49s 2026-03-08 01:06:33.657516 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.26s 2026-03-08 01:06:33.657519 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.09s 2026-03-08 01:06:33.657523 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.71s 2026-03-08 01:06:33.657527 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.19s 2026-03-08 01:06:33.657534 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.69s 2026-03-08 01:06:33.657544 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.96s 2026-03-08 01:06:33.657550 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.70s 2026-03-08 01:06:33.657557 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.59s 2026-03-08 01:06:33.657562 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.20s 2026-03-08 01:06:33.657568 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.99s 2026-03-08 01:06:33.657574 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.82s 2026-03-08 01:06:36.690058 | orchestrator | 2026-03-08 01:06:36 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:06:36.692299 | orchestrator | 2026-03-08 01:06:36 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:06:36.694465 | orchestrator | 2026-03-08 01:06:36 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:06:36.696584 | orchestrator | 2026-03-08 01:06:36 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 
01:06:36.696683 | orchestrator | 2026-03-08 01:06:36 | INFO  | Wait 1 second(s) until the next check
| 2026-03-08 01:07:25 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:25.553305 | orchestrator | 2026-03-08 01:07:25 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:25.554724 | orchestrator | 2026-03-08 01:07:25 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:25.556147 | orchestrator | 2026-03-08 01:07:25 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:25.556226 | orchestrator | 2026-03-08 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:28.606063 | orchestrator | 2026-03-08 01:07:28 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:28.606351 | orchestrator | 2026-03-08 01:07:28 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:28.606971 | orchestrator | 2026-03-08 01:07:28 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:28.610389 | orchestrator | 2026-03-08 01:07:28 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:28.611340 | orchestrator | 2026-03-08 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:31.654540 | orchestrator | 2026-03-08 01:07:31 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:31.660626 | orchestrator | 2026-03-08 01:07:31 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:31.662394 | orchestrator | 2026-03-08 01:07:31 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:31.664081 | orchestrator | 2026-03-08 01:07:31 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:31.664118 | orchestrator | 2026-03-08 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:34.717866 | orchestrator | 2026-03-08 01:07:34 | INFO  | 
Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:34.719815 | orchestrator | 2026-03-08 01:07:34 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:34.721698 | orchestrator | 2026-03-08 01:07:34 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:34.722917 | orchestrator | 2026-03-08 01:07:34 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:34.722942 | orchestrator | 2026-03-08 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:37.763458 | orchestrator | 2026-03-08 01:07:37 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:37.765756 | orchestrator | 2026-03-08 01:07:37 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:37.768086 | orchestrator | 2026-03-08 01:07:37 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:37.770772 | orchestrator | 2026-03-08 01:07:37 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:37.771538 | orchestrator | 2026-03-08 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:40.824865 | orchestrator | 2026-03-08 01:07:40 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:40.825203 | orchestrator | 2026-03-08 01:07:40 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:40.827951 | orchestrator | 2026-03-08 01:07:40 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:40.830195 | orchestrator | 2026-03-08 01:07:40 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:40.830405 | orchestrator | 2026-03-08 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:43.874842 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task 
f5f9dd2b-671a-429b-af69-c924fe6532ab is in state STARTED 2026-03-08 01:07:43.878092 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:43.880894 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:43.882786 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:43.882919 | orchestrator | 2026-03-08 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:46.927873 | orchestrator | 2026-03-08 01:07:46.927942 | orchestrator | 2026-03-08 01:07:46.927949 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:07:46.927956 | orchestrator | 2026-03-08 01:07:46.927963 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:07:46.927985 | orchestrator | Sunday 08 March 2026 01:04:50 +0000 (0:00:00.381) 0:00:00.381 ********** 2026-03-08 01:07:46.927992 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:07:46.928000 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:07:46.928105 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:07:46.928111 | orchestrator | 2026-03-08 01:07:46.928115 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:07:46.928119 | orchestrator | Sunday 08 March 2026 01:04:51 +0000 (0:00:00.366) 0:00:00.747 ********** 2026-03-08 01:07:46.928123 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-08 01:07:46.928127 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-08 01:07:46.928131 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-08 01:07:46.928134 | orchestrator | 2026-03-08 01:07:46.928138 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2026-03-08 01:07:46.928142 | orchestrator | 2026-03-08 01:07:46.928146 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:46.928150 | orchestrator | Sunday 08 March 2026 01:04:51 +0000 (0:00:00.502) 0:00:01.250 ********** 2026-03-08 01:07:46.928153 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:46.928158 | orchestrator | 2026-03-08 01:07:46.928162 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-08 01:07:46.928166 | orchestrator | Sunday 08 March 2026 01:04:52 +0000 (0:00:00.671) 0:00:01.921 ********** 2026-03-08 01:07:46.928170 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-08 01:07:46.928173 | orchestrator | 2026-03-08 01:07:46.928177 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-08 01:07:46.928181 | orchestrator | Sunday 08 March 2026 01:04:55 +0000 (0:00:03.457) 0:00:05.379 ********** 2026-03-08 01:07:46.928185 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-08 01:07:46.928189 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-08 01:07:46.928193 | orchestrator | 2026-03-08 01:07:46.928197 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-08 01:07:46.928200 | orchestrator | Sunday 08 March 2026 01:05:02 +0000 (0:00:06.429) 0:00:11.809 ********** 2026-03-08 01:07:46.928204 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 01:07:46.928209 | orchestrator | 2026-03-08 01:07:46.928213 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-08 01:07:46.928217 | 
orchestrator | Sunday 08 March 2026 01:05:05 +0000 (0:00:03.385) 0:00:15.194 ********** 2026-03-08 01:07:46.928221 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:07:46.928225 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-08 01:07:46.928229 | orchestrator | 2026-03-08 01:07:46.928233 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-08 01:07:46.928236 | orchestrator | Sunday 08 March 2026 01:05:09 +0000 (0:00:04.262) 0:00:19.457 ********** 2026-03-08 01:07:46.928240 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:07:46.928245 | orchestrator | 2026-03-08 01:07:46.928248 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-08 01:07:46.928252 | orchestrator | Sunday 08 March 2026 01:05:13 +0000 (0:00:03.844) 0:00:23.302 ********** 2026-03-08 01:07:46.928256 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-08 01:07:46.928260 | orchestrator | 2026-03-08 01:07:46.928264 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-08 01:07:46.928285 | orchestrator | Sunday 08 March 2026 01:05:17 +0000 (0:00:04.095) 0:00:27.397 ********** 2026-03-08 01:07:46.928312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.928320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.928328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.928340 | orchestrator | 2026-03-08 01:07:46.928347 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:46.928353 | orchestrator | Sunday 08 March 2026 01:05:21 +0000 (0:00:03.336) 0:00:30.734 ********** 2026-03-08 01:07:46.928362 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:46.928368 | orchestrator | 2026-03-08 01:07:46.928381 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-08 01:07:46.928387 | orchestrator | Sunday 08 March 2026 01:05:21 +0000 (0:00:00.608) 0:00:31.343 ********** 2026-03-08 01:07:46.928393 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.928399 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.928409 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.928416 | orchestrator | 2026-03-08 01:07:46.928422 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-08 01:07:46.928428 | orchestrator | Sunday 08 March 2026 01:05:26 +0000 (0:00:04.884) 0:00:36.228 ********** 2026-03-08 01:07:46.928434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.928507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 
'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.928515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.928520 | orchestrator | 2026-03-08 01:07:46.928526 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-08 01:07:46.928532 | orchestrator | Sunday 08 March 2026 01:05:27 +0000 (0:00:01.333) 0:00:37.561 ********** 2026-03-08 01:07:46.928538 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.928544 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.928550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.928556 | orchestrator | 2026-03-08 01:07:46.928563 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-08 01:07:46.928569 | orchestrator | Sunday 08 March 2026 01:05:29 +0000 (0:00:01.172) 0:00:38.733 ********** 2026-03-08 01:07:46.928575 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:07:46.928581 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:07:46.928587 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:07:46.928593 | orchestrator | 2026-03-08 01:07:46.928599 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-08 01:07:46.928604 | orchestrator | Sunday 08 March 2026 01:05:29 +0000 (0:00:00.867) 0:00:39.602 ********** 2026-03-08 01:07:46.928610 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.928622 | orchestrator | 2026-03-08 01:07:46.928627 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-08 01:07:46.928633 | orchestrator | Sunday 08 March 2026 
01:05:30 +0000 (0:00:00.132) 0:00:39.734 ********** 2026-03-08 01:07:46.928639 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.928645 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.928651 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.928717 | orchestrator | 2026-03-08 01:07:46.928726 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:46.928733 | orchestrator | Sunday 08 March 2026 01:05:30 +0000 (0:00:00.308) 0:00:40.043 ********** 2026-03-08 01:07:46.928739 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:46.928746 | orchestrator | 2026-03-08 01:07:46.928753 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-08 01:07:46.928760 | orchestrator | Sunday 08 March 2026 01:05:30 +0000 (0:00:00.556) 0:00:40.599 ********** 2026-03-08 01:07:46.928776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.928791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.928812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.928820 | orchestrator | 2026-03-08 01:07:46.928827 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-08 01:07:46.928834 | orchestrator | Sunday 08 March 2026 01:05:35 +0000 (0:00:04.178) 0:00:44.777 ********** 2026-03-08 01:07:46.928852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 01:07:46.928864 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.928870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 01:07:46.928877 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.928892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 01:07:46.928900 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.928907 | orchestrator | 2026-03-08 
01:07:46.928914 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-08 01:07:46.928921 | orchestrator | Sunday 08 March 2026 01:05:37 +0000 (0:00:02.599) 0:00:47.376 ********** 2026-03-08 01:07:46.928927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 01:07:46.928939 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.928949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 01:07:46.928956 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.928968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 01:07:46.928980 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.928986 | orchestrator | 2026-03-08 01:07:46.928992 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-08 01:07:46.928999 | orchestrator | Sunday 08 March 2026 01:05:41 +0000 (0:00:03.436) 0:00:50.812 ********** 2026-03-08 01:07:46.929005 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 01:07:46.929011 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929017 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929023 | orchestrator | 2026-03-08 01:07:46.929030 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-08 01:07:46.929036 | orchestrator | Sunday 08 March 2026 01:05:45 +0000 (0:00:04.339) 0:00:55.151 ********** 2026-03-08 01:07:46.929042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.929060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.929073 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.929081 | orchestrator | 2026-03-08 01:07:46.929087 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-08 01:07:46.929093 | orchestrator | Sunday 08 March 2026 01:05:52 +0000 (0:00:07.408) 0:01:02.559 ********** 2026-03-08 01:07:46.929101 | orchestrator | changed: [testbed-node-1] 
2026-03-08 01:07:46.929107 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929113 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.929120 | orchestrator | 2026-03-08 01:07:46.929126 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-08 01:07:46.929131 | orchestrator | Sunday 08 March 2026 01:05:59 +0000 (0:00:06.527) 0:01:09.087 ********** 2026-03-08 01:07:46.929138 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929144 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929150 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929157 | orchestrator | 2026-03-08 01:07:46.929163 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-08 01:07:46.929170 | orchestrator | Sunday 08 March 2026 01:06:03 +0000 (0:00:03.728) 0:01:12.815 ********** 2026-03-08 01:07:46.929184 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929196 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46 | INFO  | Task f5f9dd2b-671a-429b-af69-c924fe6532ab is in state SUCCESS 2026-03-08 01:07:46.929212 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929219 | orchestrator | 2026-03-08 01:07:46.929229 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-08 01:07:46.929238 | orchestrator | Sunday 08 March 2026 01:06:08 +0000 (0:00:05.712) 0:01:18.527 ********** 2026-03-08 01:07:46.929246 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929254 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929261 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929269 | orchestrator | 2026-03-08 01:07:46.929276 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-08 01:07:46.929285 | orchestrator | Sunday 08 
March 2026 01:06:13 +0000 (0:00:04.218) 0:01:22.746 ********** 2026-03-08 01:07:46.929292 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929299 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929306 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929313 | orchestrator | 2026-03-08 01:07:46.929320 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-08 01:07:46.929327 | orchestrator | Sunday 08 March 2026 01:06:17 +0000 (0:00:04.254) 0:01:27.000 ********** 2026-03-08 01:07:46.929334 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929343 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929350 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929356 | orchestrator | 2026-03-08 01:07:46.929363 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-08 01:07:46.929370 | orchestrator | Sunday 08 March 2026 01:06:17 +0000 (0:00:00.344) 0:01:27.345 ********** 2026-03-08 01:07:46.929377 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-08 01:07:46.929384 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-08 01:07:46.929391 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929398 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929404 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-08 01:07:46.929411 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929417 | orchestrator | 2026-03-08 01:07:46.929424 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-08 01:07:46.929430 | orchestrator | Sunday 08 March 2026 01:06:21 +0000 (0:00:03.554) 0:01:30.899 ********** 2026-03-08 
01:07:46.929436 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.929443 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929449 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.929455 | orchestrator | 2026-03-08 01:07:46.929461 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-08 01:07:46.929467 | orchestrator | Sunday 08 March 2026 01:06:28 +0000 (0:00:06.792) 0:01:37.691 ********** 2026-03-08 01:07:46.929474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.929501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.929509 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:46.929519 | orchestrator | 2026-03-08 01:07:46.929523 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:46.929527 | orchestrator | Sunday 08 March 2026 01:06:33 +0000 (0:00:04.996) 0:01:42.688 ********** 2026-03-08 
01:07:46.929531 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.929534 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.929538 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.929542 | orchestrator | 2026-03-08 01:07:46.929545 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-08 01:07:46.929549 | orchestrator | Sunday 08 March 2026 01:06:33 +0000 (0:00:00.637) 0:01:43.325 ********** 2026-03-08 01:07:46.929553 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929556 | orchestrator | 2026-03-08 01:07:46.929560 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-08 01:07:46.929564 | orchestrator | Sunday 08 March 2026 01:06:36 +0000 (0:00:02.529) 0:01:45.855 ********** 2026-03-08 01:07:46.929568 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929571 | orchestrator | 2026-03-08 01:07:46.929575 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-08 01:07:46.929579 | orchestrator | Sunday 08 March 2026 01:06:38 +0000 (0:00:02.448) 0:01:48.303 ********** 2026-03-08 01:07:46.929583 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929586 | orchestrator | 2026-03-08 01:07:46.929590 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-08 01:07:46.929594 | orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:02.256) 0:01:50.560 ********** 2026-03-08 01:07:46.929601 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929605 | orchestrator | 2026-03-08 01:07:46.929609 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-08 01:07:46.929612 | orchestrator | Sunday 08 March 2026 01:07:10 +0000 (0:00:30.067) 0:02:20.628 ********** 2026-03-08 01:07:46.929616 | orchestrator | changed: [testbed-node-0] 2026-03-08 
01:07:46.929620 | orchestrator | 2026-03-08 01:07:46.929627 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-08 01:07:46.929630 | orchestrator | Sunday 08 March 2026 01:07:12 +0000 (0:00:01.808) 0:02:22.436 ********** 2026-03-08 01:07:46.929634 | orchestrator | 2026-03-08 01:07:46.929638 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-08 01:07:46.929641 | orchestrator | Sunday 08 March 2026 01:07:12 +0000 (0:00:00.226) 0:02:22.663 ********** 2026-03-08 01:07:46.929645 | orchestrator | 2026-03-08 01:07:46.929649 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-08 01:07:46.929652 | orchestrator | Sunday 08 March 2026 01:07:13 +0000 (0:00:00.065) 0:02:22.728 ********** 2026-03-08 01:07:46.929679 | orchestrator | 2026-03-08 01:07:46.929685 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-08 01:07:46.929691 | orchestrator | Sunday 08 March 2026 01:07:13 +0000 (0:00:00.061) 0:02:22.789 ********** 2026-03-08 01:07:46.929695 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.929699 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.929702 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.929706 | orchestrator | 2026-03-08 01:07:46.929710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:07:46.929715 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:07:46.929720 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:07:46.929724 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:07:46.929734 | orchestrator | 2026-03-08 01:07:46.929738 | 
orchestrator | 2026-03-08 01:07:46.929742 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:07:46.929745 | orchestrator | Sunday 08 March 2026 01:07:43 +0000 (0:00:30.507) 0:02:53.297 ********** 2026-03-08 01:07:46.929749 | orchestrator | =============================================================================== 2026-03-08 01:07:46.929753 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.51s 2026-03-08 01:07:46.929757 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.07s 2026-03-08 01:07:46.929760 | orchestrator | glance : Copying over config.json files for services -------------------- 7.41s 2026-03-08 01:07:46.929764 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 6.79s 2026-03-08 01:07:46.929768 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.53s 2026-03-08 01:07:46.929771 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.43s 2026-03-08 01:07:46.929775 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.71s 2026-03-08 01:07:46.929779 | orchestrator | glance : Check glance containers ---------------------------------------- 5.00s 2026-03-08 01:07:46.929782 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.88s 2026-03-08 01:07:46.929786 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.34s 2026-03-08 01:07:46.929790 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.27s 2026-03-08 01:07:46.929794 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.25s 2026-03-08 01:07:46.929797 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.22s 
2026-03-08 01:07:46.929801 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.18s 2026-03-08 01:07:46.929805 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.10s 2026-03-08 01:07:46.929808 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.84s 2026-03-08 01:07:46.929812 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.73s 2026-03-08 01:07:46.929816 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.55s 2026-03-08 01:07:46.929820 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.46s 2026-03-08 01:07:46.929823 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.44s 2026-03-08 01:07:46.929827 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:46.930374 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:46.930399 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:46.931146 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED 2026-03-08 01:07:46.931676 | orchestrator | 2026-03-08 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:49.972026 | orchestrator | 2026-03-08 01:07:49 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:49.974420 | orchestrator | 2026-03-08 01:07:49 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:49.976880 | orchestrator | 2026-03-08 01:07:49 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:49.978531 | orchestrator | 2026-03-08 01:07:49 | 
INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED 2026-03-08 01:07:49.978584 | orchestrator | 2026-03-08 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:53.022642 | orchestrator | 2026-03-08 01:07:53 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:53.022836 | orchestrator | 2026-03-08 01:07:53 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:53.023560 | orchestrator | 2026-03-08 01:07:53 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:53.024254 | orchestrator | 2026-03-08 01:07:53 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED 2026-03-08 01:07:53.024289 | orchestrator | 2026-03-08 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:56.097574 | orchestrator | 2026-03-08 01:07:56 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:56.100149 | orchestrator | 2026-03-08 01:07:56 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:56.101276 | orchestrator | 2026-03-08 01:07:56 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:56.104269 | orchestrator | 2026-03-08 01:07:56 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED 2026-03-08 01:07:56.104326 | orchestrator | 2026-03-08 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:59.150434 | orchestrator | 2026-03-08 01:07:59 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED 2026-03-08 01:07:59.152093 | orchestrator | 2026-03-08 01:07:59 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED 2026-03-08 01:07:59.152807 | orchestrator | 2026-03-08 01:07:59 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:07:59.153954 | orchestrator | 2026-03-08 01:07:59 | INFO  | Task 
26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:07:59.153994 | orchestrator | 2026-03-08 01:07:59 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:02.190511 | orchestrator | 2026-03-08 01:08:02 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:02.192537 | orchestrator | 2026-03-08 01:08:02 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:02.194444 | orchestrator | 2026-03-08 01:08:02 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:02.195626 | orchestrator | 2026-03-08 01:08:02 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:02.195843 | orchestrator | 2026-03-08 01:08:02 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:05.228094 | orchestrator | 2026-03-08 01:08:05 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:05.230081 | orchestrator | 2026-03-08 01:08:05 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:05.232264 | orchestrator | 2026-03-08 01:08:05 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:05.234424 | orchestrator | 2026-03-08 01:08:05 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:05.234468 | orchestrator | 2026-03-08 01:08:05 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:08.278584 | orchestrator | 2026-03-08 01:08:08 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:08.280213 | orchestrator | 2026-03-08 01:08:08 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:08.281320 | orchestrator | 2026-03-08 01:08:08 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:08.283170 | orchestrator | 2026-03-08 01:08:08 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:08.283211 | orchestrator | 2026-03-08 01:08:08 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:11.324735 | orchestrator | 2026-03-08 01:08:11 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:11.327901 | orchestrator | 2026-03-08 01:08:11 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:11.330489 | orchestrator | 2026-03-08 01:08:11 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:11.333226 | orchestrator | 2026-03-08 01:08:11 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:11.333362 | orchestrator | 2026-03-08 01:08:11 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:14.397807 | orchestrator | 2026-03-08 01:08:14 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:14.397961 | orchestrator | 2026-03-08 01:08:14 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:14.397973 | orchestrator | 2026-03-08 01:08:14 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:14.397980 | orchestrator | 2026-03-08 01:08:14 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:14.397988 | orchestrator | 2026-03-08 01:08:14 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:17.426362 | orchestrator | 2026-03-08 01:08:17 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:17.427443 | orchestrator | 2026-03-08 01:08:17 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:17.428841 | orchestrator | 2026-03-08 01:08:17 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:17.429656 | orchestrator | 2026-03-08 01:08:17 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:17.429701 | orchestrator | 2026-03-08 01:08:17 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:20.467371 | orchestrator | 2026-03-08 01:08:20 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:20.468851 | orchestrator | 2026-03-08 01:08:20 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:20.470809 | orchestrator | 2026-03-08 01:08:20 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:20.473570 | orchestrator | 2026-03-08 01:08:20 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:20.473643 | orchestrator | 2026-03-08 01:08:20 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:23.503606 | orchestrator | 2026-03-08 01:08:23 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:23.504113 | orchestrator | 2026-03-08 01:08:23 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state STARTED
2026-03-08 01:08:23.505282 | orchestrator | 2026-03-08 01:08:23 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:08:23.506713 | orchestrator | 2026-03-08 01:08:23 | INFO  | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:08:23.506777 | orchestrator | 2026-03-08 01:08:23 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:08:26.544735 | orchestrator | 2026-03-08 01:08:26 | INFO  | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED
2026-03-08 01:08:26.548413 | orchestrator | 2026-03-08 01:08:26 | INFO  | Task 6799475d-ef45-493d-a345-6ee098b5238b is in state SUCCESS
2026-03-08 01:08:26.550505 | orchestrator |
2026-03-08 01:08:26.550562 | orchestrator |
2026-03-08 01:08:26.550569 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:08:26.550574 |
orchestrator |
2026-03-08 01:08:26.550578 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:08:26.550583 | orchestrator | Sunday 08 March 2026 01:05:21 +0000 (0:00:00.232) 0:00:00.232 **********
2026-03-08 01:08:26.550587 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:08:26.550592 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:08:26.550596 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:08:26.550600 | orchestrator |
2026-03-08 01:08:26.550604 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:08:26.550608 | orchestrator | Sunday 08 March 2026 01:05:22 +0000 (0:00:00.842) 0:00:01.075 **********
2026-03-08 01:08:26.550612 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-08 01:08:26.550617 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-08 01:08:26.550621 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-08 01:08:26.550625 | orchestrator |
2026-03-08 01:08:26.550629 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-08 01:08:26.550633 | orchestrator |
2026-03-08 01:08:26.550637 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-08 01:08:26.550641 | orchestrator | Sunday 08 March 2026 01:05:23 +0000 (0:00:00.747) 0:00:01.822 **********
2026-03-08 01:08:26.550646 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:08:26.550652 | orchestrator |
2026-03-08 01:08:26.550673 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-08 01:08:26.550689 | orchestrator | Sunday 08 March 2026 01:05:24 +0000 (0:00:01.039) 0:00:02.862 **********
2026-03-08 01:08:26.550696 | orchestrator | changed: [testbed-node-0] => (item=cinderv3
(volumev3)) 2026-03-08 01:08:26.550702 | orchestrator | 2026-03-08 01:08:26.550709 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-08 01:08:26.550732 | orchestrator | Sunday 08 March 2026 01:05:27 +0000 (0:00:03.151) 0:00:06.013 ********** 2026-03-08 01:08:26.550740 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-08 01:08:26.550748 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-08 01:08:26.550752 | orchestrator | 2026-03-08 01:08:26.550756 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-08 01:08:26.550760 | orchestrator | Sunday 08 March 2026 01:05:33 +0000 (0:00:06.190) 0:00:12.204 ********** 2026-03-08 01:08:26.550764 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 01:08:26.550768 | orchestrator | 2026-03-08 01:08:26.550771 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-08 01:08:26.550775 | orchestrator | Sunday 08 March 2026 01:05:36 +0000 (0:00:02.867) 0:00:15.072 ********** 2026-03-08 01:08:26.550779 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:08:26.550783 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-08 01:08:26.550787 | orchestrator | 2026-03-08 01:08:26.550790 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-08 01:08:26.550794 | orchestrator | Sunday 08 March 2026 01:05:40 +0000 (0:00:03.817) 0:00:18.889 ********** 2026-03-08 01:08:26.550798 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:08:26.550801 | orchestrator | 2026-03-08 01:08:26.550805 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] 
********************** 2026-03-08 01:08:26.550809 | orchestrator | Sunday 08 March 2026 01:05:44 +0000 (0:00:03.591) 0:00:22.481 ********** 2026-03-08 01:08:26.550813 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-08 01:08:26.550831 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-08 01:08:26.550835 | orchestrator | 2026-03-08 01:08:26.550839 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-08 01:08:26.550843 | orchestrator | Sunday 08 March 2026 01:05:51 +0000 (0:00:07.018) 0:00:29.499 ********** 2026-03-08 01:08:26.550850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.550870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.550879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.550884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.550992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551280 | orchestrator | 2026-03-08 01:08:26.551285 | 
orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-08 01:08:26.551289 | orchestrator | Sunday 08 March 2026 01:05:53 +0000 (0:00:02.369) 0:00:31.869 **********
2026-03-08 01:08:26.551294 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:08:26.551298 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:08:26.551303 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:08:26.551307 | orchestrator |
2026-03-08 01:08:26.551312 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-08 01:08:26.551316 | orchestrator | Sunday 08 March 2026 01:05:53 +0000 (0:00:00.437) 0:00:32.306 **********
2026-03-08 01:08:26.551321 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:08:26.551325 | orchestrator |
2026-03-08 01:08:26.551334 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-08 01:08:26.551339 | orchestrator | Sunday 08 March 2026 01:05:54 +0000 (0:00:00.769) 0:00:33.075 **********
2026-03-08 01:08:26.551344 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-08 01:08:26.551348 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-08 01:08:26.551352 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-08 01:08:26.551357 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-03-08 01:08:26.551361 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-03-08 01:08:26.551365 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-03-08 01:08:26.551370 | orchestrator |
2026-03-08 01:08:26.551374 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-03-08 01:08:26.551379 | orchestrator | Sunday 08 March 2026 01:05:56 +0000 (0:00:02.203) 0:00:35.279 **********
2026-03-08 01:08:26.551398 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:08:26.551409 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:08:26.551415 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:08:26.551419 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:08:26.551429 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:08:26.551437 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:08:26.551449 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:08:26.551455 | orchestrator | 
changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:08:26.551460 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:08:26.551470 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:08:26.551474 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:08:26.551486 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}])
2026-03-08 01:08:26.551490 | orchestrator |
2026-03-08 01:08:26.551494 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-08 01:08:26.551497 | orchestrator | Sunday 08 March 2026 01:06:00 +0000 (0:00:03.758) 0:00:39.039 **********
2026-03-08 01:08:26.551501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-08 01:08:26.551506 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-08 01:08:26.551510 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-08 01:08:26.551513 | orchestrator |
2026-03-08 01:08:26.551517 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-08 01:08:26.551521 | orchestrator | Sunday 08 March 2026 01:06:02 +0000 (0:00:02.199) 0:00:41.238 **********
2026-03-08 01:08:26.551525 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-08 01:08:26.551528 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-08 01:08:26.551532 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-08 01:08:26.551536 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-08 01:08:26.551540 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-08 01:08:26.551543 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-08 01:08:26.551547 | orchestrator |
2026-03-08 01:08:26.551551 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-08 01:08:26.551555 | orchestrator | Sunday 08 March 2026 01:06:06 +0000 (0:00:03.658) 0:00:44.897 **********
2026-03-08 01:08:26.551559 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-08 01:08:26.551564 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-08 01:08:26.551569 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-08 01:08:26.551575 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-08 01:08:26.551619 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-08 01:08:26.551627 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-08 01:08:26.551632 | orchestrator |
2026-03-08 01:08:26.551638 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-08 01:08:26.551644 | orchestrator | Sunday 08 March 2026 01:06:07 +0000 (0:00:01.163) 0:00:46.061 **********
2026-03-08 01:08:26.551650 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:08:26.551657 | orchestrator |
2026-03-08 01:08:26.551663 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-08 01:08:26.551668 | orchestrator | Sunday 08 March 2026 01:06:07 +0000 (0:00:00.274) 0:00:46.335 **********
2026-03-08 01:08:26.551777 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:08:26.551782 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:08:26.551798 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:08:26.551803 | orchestrator |
2026-03-08 01:08:26.551807 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-08 01:08:26.551811 | orchestrator | Sunday 08 March 2026 01:06:08 +0000 (0:00:00.615) 0:00:46.951 **********
2026-03-08 01:08:26.551820 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:08:26.551824 | orchestrator |
2026-03-08 01:08:26.551828 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-08 01:08:26.551832 | orchestrator | Sunday 08
March 2026 01:06:09 +0000 (0:00:01.290) 0:00:48.241 ********** 2026-03-08 01:08:26.551837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.551857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.551861 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.551866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:08:26.551963 | orchestrator | 2026-03-08 01:08:26.551967 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-08 01:08:26.551971 | orchestrator | Sunday 08 March 2026 01:06:14 +0000 (0:00:04.836) 0:00:53.077 ********** 2026-03-08 01:08:26.551978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:08:26.551982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.551986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.551993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552004 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:08:26.552011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:08:26.552024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552045 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 01:08:26.552051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:08:26.552068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552092 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:08:26.552098 | orchestrator | 2026-03-08 01:08:26.552105 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-08 01:08:26.552110 | orchestrator | Sunday 08 March 2026 01:06:15 +0000 (0:00:01.125) 0:00:54.203 ********** 2026-03-08 01:08:26.552116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:08:26.552128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552153 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:08:26.552164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:08:26.552170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552196 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:08:26.552203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:08:26.552212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:08:26.552234 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:08:26.552240 | orchestrator | 2026-03-08 01:08:26.552246 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-08 01:08:26.552252 | orchestrator | Sunday 08 March 2026 01:06:17 +0000 (0:00:01.565) 0:00:55.768 ********** 2026-03-08 01:08:26.552258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.552268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:08:26.552278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552354 | orchestrator |
2026-03-08 01:08:26.552360 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-08 01:08:26.552367 | orchestrator | Sunday 08 March 2026 01:06:21 +0000 (0:00:04.402) 0:01:00.171 **********
2026-03-08 01:08:26.552373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-08 01:08:26.552382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-08 01:08:26.552388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-08 01:08:26.552393 | orchestrator |
2026-03-08 01:08:26.552399 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-08 01:08:26.552405 | orchestrator | Sunday 08 March 2026 01:06:24 +0000 (0:00:02.386) 0:01:02.557 **********
2026-03-08 01:08:26.552411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552519 | orchestrator |
2026-03-08 01:08:26.552525 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-08 01:08:26.552531 | orchestrator | Sunday 08 March 2026 01:06:37 +0000 (0:00:13.669) 0:01:16.226 **********
2026-03-08 01:08:26.552538 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.552545 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:08:26.552550 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:08:26.552556 | orchestrator |
2026-03-08 01:08:26.552563 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-08 01:08:26.552573 | orchestrator | Sunday 08 March 2026 01:06:39 +0000 (0:00:01.973) 0:01:18.200 **********
2026-03-08 01:08:26.552586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552617 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:08:26.552623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552656 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:08:26.552667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552803 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:08:26.552809 | orchestrator |
2026-03-08 01:08:26.552816 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-08 01:08:26.552823 | orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:00.682) 0:01:18.882 **********
2026-03-08 01:08:26.552829 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:08:26.552835 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:08:26.552841 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:08:26.552847 | orchestrator |
2026-03-08 01:08:26.552854 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-08 01:08:26.552861 | orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:00.412) 0:01:19.294 **********
2026-03-08 01:08:26.552868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:08:26.552907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.552992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.553002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.553009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.553016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:08:26.553022 | orchestrator |
2026-03-08 01:08:26.553028 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-08 01:08:26.553034 | orchestrator | Sunday 08 March 2026 01:06:43 +0000 (0:00:03.111) 0:01:22.406 **********
2026-03-08 01:08:26.553040 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:08:26.553045 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:08:26.553051 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:08:26.553058 | orchestrator |
2026-03-08 01:08:26.553064 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-03-08 01:08:26.553070 | orchestrator | Sunday 08 March 2026 01:06:44 +0000 (0:00:00.970) 0:01:23.377 **********
2026-03-08 01:08:26.553076 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553082 | orchestrator |
2026-03-08 01:08:26.553088 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-03-08 01:08:26.553095 | orchestrator | Sunday 08 March 2026 01:06:47 +0000 (0:00:02.123) 0:01:25.500 **********
2026-03-08 01:08:26.553101 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553108 | orchestrator |
2026-03-08 01:08:26.553120 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-03-08 01:08:26.553130 | orchestrator | Sunday 08 March 2026 01:06:49 +0000 (0:00:02.272) 0:01:27.773 **********
2026-03-08 01:08:26.553135 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553141 | orchestrator |
2026-03-08 01:08:26.553146 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-08 01:08:26.553152 | orchestrator | Sunday 08 March 2026 01:07:09 +0000 (0:00:19.750) 0:01:47.524 **********
2026-03-08 01:08:26.553158 | orchestrator |
2026-03-08 01:08:26.553163 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-08 01:08:26.553169 | orchestrator | Sunday 08 March 2026 01:07:09 +0000 (0:00:00.062) 0:01:47.586 **********
2026-03-08 01:08:26.553175 | orchestrator |
2026-03-08 01:08:26.553181 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-08 01:08:26.553187 | orchestrator | Sunday 08 March 2026 01:07:09 +0000 (0:00:00.060) 0:01:47.647 **********
2026-03-08 01:08:26.553193 | orchestrator |
2026-03-08 01:08:26.553199 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-03-08 01:08:26.553205 | orchestrator | Sunday 08 March 2026 01:07:09 +0000 (0:00:00.075) 0:01:47.722 **********
2026-03-08 01:08:26.553211 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553217 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:08:26.553222 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:08:26.553228 | orchestrator |
2026-03-08 01:08:26.553234 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-03-08 01:08:26.553240 | orchestrator | Sunday 08 March 2026 01:07:39 +0000 (0:00:30.005) 0:02:17.727 **********
2026-03-08 01:08:26.553246 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553252 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:08:26.553258 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:08:26.553264 | orchestrator |
2026-03-08 01:08:26.553270 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-03-08 01:08:26.553275 | orchestrator | Sunday 08 March 2026 01:07:50 +0000 (0:00:10.927) 0:02:28.655 **********
2026-03-08 01:08:26.553282 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553289 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:08:26.553300 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:08:26.553306 | orchestrator |
2026-03-08 01:08:26.553311 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-03-08 01:08:26.553317 | orchestrator | Sunday 08 March 2026 01:08:12 +0000 (0:00:22.059) 0:02:50.715 **********
2026-03-08 01:08:26.553323 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:08:26.553329 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:08:26.553334 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:08:26.553340 | orchestrator |
2026-03-08 01:08:26.553346 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-03-08 01:08:26.553351 | orchestrator | Sunday 08 March 2026 01:08:24 +0000 (0:00:12.410) 0:03:03.125 **********
2026-03-08
01:08:26.553357 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:08:26.553364 | orchestrator | 2026-03-08 01:08:26.553371 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:08:26.553379 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-08 01:08:26.553387 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:08:26.553393 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:08:26.553400 | orchestrator | 2026-03-08 01:08:26.553409 | orchestrator | 2026-03-08 01:08:26.553415 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:08:26.553422 | orchestrator | Sunday 08 March 2026 01:08:24 +0000 (0:00:00.256) 0:03:03.382 ********** 2026-03-08 01:08:26.553437 | orchestrator | =============================================================================== 2026-03-08 01:08:26.553445 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.01s 2026-03-08 01:08:26.553452 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.06s 2026-03-08 01:08:26.553458 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.75s 2026-03-08 01:08:26.553466 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.67s 2026-03-08 01:08:26.553472 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.41s 2026-03-08 01:08:26.553479 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.93s 2026-03-08 01:08:26.553485 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.02s 2026-03-08 01:08:26.553492 | orchestrator | 
service-ks-register : cinder | Creating endpoints ----------------------- 6.19s
service-cert-copy : cinder | Copying over extra CA certificates --------- 4.84s
cinder : Copying over config.json files for services -------------------- 4.40s
service-ks-register : cinder | Creating users --------------------------- 3.82s
cinder : Copying over multiple ceph.conf for cinder services ------------ 3.76s
cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.66s
service-ks-register : cinder | Creating roles --------------------------- 3.59s
service-ks-register : cinder | Creating services ------------------------ 3.15s
cinder : Check cinder containers ---------------------------------------- 3.11s
service-ks-register : cinder | Creating projects ------------------------ 2.87s
cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.39s
cinder : Ensuring config directories exist ------------------------------ 2.37s
cinder : Creating Cinder database user and setting permissions ---------- 2.27s

[... polling repeated every ~3 s from 01:08:26 to 01:10:04, condensed: "Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state STARTED", "Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED", "Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED", "Wait 1 second(s) until the next check" ...]
2026-03-08 01:10:07 | INFO | Task b6f65489-26f1-411d-bfe2-e52ab5bf6f19 is in state SUCCESS
2026-03-08 01:10:07 | INFO | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED
2026-03-08 01:10:07 | INFO | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:10:07 | INFO | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state STARTED
2026-03-08 01:10:07 | INFO | Wait 1 second(s) until the next check
[... polling repeated from 01:10:10 to 01:10:16, condensed: tasks a3f4c85c-2bea-44d6-8782-40f3aa85b5f9, 3a4cc380-26c6-4cf8-81ff-1aba35d87488 and 26c39ad8-df30-4554-b029-233286c9fa3b all in state STARTED ...]
2026-03-08 01:10:19 | INFO | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED
2026-03-08 01:10:19 | INFO | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED
2026-03-08 01:10:19 | INFO | Task 26c39ad8-df30-4554-b029-233286c9fa3b is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 08 March 2026 01:06:37 +0000 (0:00:00.208)  0:00:00.208 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
TASK [Group hosts based on enabled services] ***********************************
Sunday 08 March 2026 01:06:38 +0000 (0:00:00.440)  0:00:00.649 **********
ok: [testbed-node-0] => (item=enable_nova_True)
ok: [testbed-node-1] => (item=enable_nova_True)
ok: [testbed-node-2] => (item=enable_nova_True)

PLAY [Wait for the Nova service] ***********************************************

TASK [Waiting for Nova public port to be UP] ***********************************
Sunday 08 March 2026 01:06:39 +0000 (0:00:00.931)  0:00:01.581 **********

STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********

STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Sunday 08 March 2026 01:10:05 +0000 (0:03:25.892)  0:03:27.473 **********
===============================================================================
Waiting for Nova public port to be UP --------------------------------- 205.89s
Group hosts based on enabled services ----------------------------------- 0.93s
Group hosts based on Kolla action --------------------------------------- 0.44s

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 08 March 2026 01:07:48 +0000 (0:00:00.269)  0:00:00.269 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Sunday 08 March 2026 01:07:48 +0000 (0:00:00.297)  0:00:00.566 **********
ok: [testbed-node-0] => (item=enable_grafana_True)
ok: [testbed-node-1] => (item=enable_grafana_True)
ok: [testbed-node-2] => (item=enable_grafana_True)

PLAY [Apply role grafana] ******************************************************

TASK [grafana : include_tasks] *************************************************
Sunday 08 March 2026 01:07:49 +0000 (0:00:00.463)  0:00:01.030 **********
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [grafana : Ensuring config directories exist] *****************************
Sunday 08 March 2026 01:07:49 +0000 (0:00:00.548)  0:00:01.578 **********
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', ... same value as for testbed-node-0 ...})
changed: [testbed-node-2] => (item={'key': 'grafana', ... same value as for testbed-node-0 ...})

TASK [grafana : Check if extra configuration file exists] **********************
Sunday 08 March 2026 01:07:50 +0000 (0:00:00.810)  0:00:02.389 **********
[WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
issue: '/operations/prometheus/grafana' is not a directory
ok: [testbed-node-0 -> localhost]

TASK [grafana : include_tasks] *************************************************
Sunday 08 March 2026 01:07:51 +0000 (0:00:01.169)  0:00:03.559 **********
included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
Sunday 08 March 2026 01:07:52 +0000 (0:00:00.651)  0:00:04.210 **********
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.423985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.423991 | orchestrator | 2026-03-08 01:10:19.424062 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-08 01:10:19.424070 | orchestrator | Sunday 08 March 2026 01:07:54 +0000 (0:00:01.470) 0:00:05.681 ********** 2026-03-08 01:10:19.424078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:10:19.424083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:10:19.424089 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.424096 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.424103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:10:19.424113 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.424120 | orchestrator | 2026-03-08 01:10:19.424126 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-08 01:10:19.424133 | orchestrator | Sunday 08 March 2026 01:07:54 +0000 (0:00:00.381) 0:00:06.062 ********** 2026-03-08 01:10:19.424140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:10:19.424148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:10:19.424155 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.424161 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.424175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:10:19.424181 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.424185 | orchestrator | 2026-03-08 01:10:19.424189 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-08 01:10:19.424194 | orchestrator | Sunday 08 March 2026 01:07:55 +0000 (0:00:00.707) 0:00:06.770 ********** 2026-03-08 01:10:19.424198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.424203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.424219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.424226 | orchestrator | 2026-03-08 01:10:19.424233 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-08 01:10:19.424239 | orchestrator | Sunday 08 March 2026 01:07:56 +0000 (0:00:01.188) 0:00:07.958 ********** 2026-03-08 01:10:19.424246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.424260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.424268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.424307 | orchestrator | 2026-03-08 01:10:19.424329 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-08 01:10:19.424333 | orchestrator | Sunday 08 March 2026 01:07:57 +0000 (0:00:01.431) 0:00:09.389 ********** 2026-03-08 01:10:19.424338 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.424342 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.424347 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.424355 | orchestrator | 2026-03-08 01:10:19.424360 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-08 01:10:19.424366 | orchestrator | Sunday 08 March 2026 01:07:58 +0000 (0:00:00.510) 0:00:09.899 ********** 2026-03-08 01:10:19.424373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-08 01:10:19.424379 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-08 01:10:19.424385 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-08 01:10:19.424392 | orchestrator | 2026-03-08 01:10:19.424398 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-08 01:10:19.424405 | orchestrator | Sunday 08 March 2026 01:07:59 +0000 (0:00:01.115) 0:00:11.015 ********** 2026-03-08 01:10:19.424412 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-08 01:10:19.424419 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-08 01:10:19.424425 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-08 01:10:19.424432 | orchestrator | 2026-03-08 01:10:19.424438 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-08 01:10:19.424445 | orchestrator | Sunday 08 March 2026 01:08:00 +0000 (0:00:01.028) 0:00:12.044 ********** 2026-03-08 01:10:19.424452 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 01:10:19.424458 | orchestrator | 2026-03-08 01:10:19.424465 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-08 01:10:19.424472 | orchestrator | Sunday 08 March 2026 01:08:01 +0000 (0:00:00.668) 0:00:12.712 ********** 2026-03-08 01:10:19.424478 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-08 01:10:19.424483 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-08 01:10:19.424487 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:10:19.424492 | orchestrator | ok: [testbed-node-0] 2026-03-08 
01:10:19.424496 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:10:19.424500 | orchestrator | 2026-03-08 01:10:19.424504 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-08 01:10:19.424509 | orchestrator | Sunday 08 March 2026 01:08:01 +0000 (0:00:00.596) 0:00:13.308 ********** 2026-03-08 01:10:19.424513 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.424517 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.424522 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.424526 | orchestrator | 2026-03-08 01:10:19.424530 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-08 01:10:19.424534 | orchestrator | Sunday 08 March 2026 01:08:02 +0000 (0:00:00.462) 0:00:13.771 ********** 2026-03-08 01:10:19.424539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088196, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8276546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088196, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772929026.8276546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088196, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8276546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088543, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9179118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088543, 'dev': 141, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9179118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088543, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9179118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088421, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8899117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088421, 'dev': 141, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8899117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088421, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8899117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088546, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.922912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 
1088546, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.922912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088546, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.922912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088438, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8939116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088438, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8939116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088438, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8939116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088467, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088467, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088467, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9149117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088192, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8262832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.424692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088192, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8262832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088192, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8262832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088203, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8869116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088203, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8869116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088203, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8869116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088425, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8909116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088425, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8909116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088425, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8909116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088450, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8969116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088450, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8969116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088450, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8969116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088540, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9176834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088540, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9176834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088540, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9176834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088412, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8889115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088412, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8889115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088412, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8889115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088466, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8999116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425248 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088466, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8999116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088466, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8999116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088439, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8969116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425273 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088439, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8969116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088439, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8969116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088434, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8933513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 
01:10:19.425286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088434, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8933513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088434, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8933513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088429, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.892472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088429, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.892472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088429, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.892472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088454, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.899468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088454, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.899468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088454, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.899468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088427, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8919115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088427, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8919115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088427, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.8919115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088537, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9159117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088537, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9159117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088537, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9159117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088841, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0009122, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088841, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929027.0009122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088582, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.966672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088841, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1772929027.0009122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088582, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.966672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088567, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.925912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088582, 'dev': 
141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.966672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088567, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.925912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088733, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.976352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088567, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.925912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088733, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.976352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088560, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9239118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088733, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.976352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088560, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9239118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid':
2026-03-08 01:10:19 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:10:19.425501 | orchestrator | False, 'isgid': False}}) 2026-03-08 01:10:19.425509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088793, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.985912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425514 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088560, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9239118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088793, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.985912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088738, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9819121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088793, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.985912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088738, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9819121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088798, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9864538, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088738, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9819121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088798, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9864538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088833, 'dev': 141, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.998182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088798, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9864538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088833, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.998182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088786, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9854975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088833, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.998182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088786, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9854975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088715, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9719121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088786, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9854975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088715, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9719121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088574, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9299119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088715, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9719121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088574, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9299119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088700, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.96756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088574, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9299119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088700, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.96756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425770 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088569, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9274697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088700, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.96756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088569, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9274697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 
01:10:19.425798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088726, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9749122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088569, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9274697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088726, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9749122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088819, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9964933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088819, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9964933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088726, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9749122, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088807, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9919121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088807, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9919121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 
1088819, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9964933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088562, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9249117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088562, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9249117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088807, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9919121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088565, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9254665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088565, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9254665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088562, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9249117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088773, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9833658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088773, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9833658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425926 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088565, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9254665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088805, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9877183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088805, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9877183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425942 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088773, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9833658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088805, 'dev': 141, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772929026.9877183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:10:19.425951 | orchestrator | 2026-03-08 01:10:19.425979 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-08 01:10:19.425987 | orchestrator | Sunday 08 March 2026 01:08:39 +0000 (0:00:37.140) 0:00:50.911 ********** 2026-03-08 01:10:19.426001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.426010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.426052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:10:19.426059 | orchestrator | 2026-03-08 01:10:19.426065 | orchestrator | TASK [grafana : Creating grafana database] 
************************************* 2026-03-08 01:10:19.426072 | orchestrator | Sunday 08 March 2026 01:08:40 +0000 (0:00:00.986) 0:00:51.898 ********** 2026-03-08 01:10:19.426079 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:10:19.426085 | orchestrator | 2026-03-08 01:10:19.426091 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-08 01:10:19.426097 | orchestrator | Sunday 08 March 2026 01:08:42 +0000 (0:00:02.293) 0:00:54.192 ********** 2026-03-08 01:10:19.426104 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:10:19.426110 | orchestrator | 2026-03-08 01:10:19.426116 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-08 01:10:19.426123 | orchestrator | Sunday 08 March 2026 01:08:44 +0000 (0:00:02.405) 0:00:56.597 ********** 2026-03-08 01:10:19.426129 | orchestrator | 2026-03-08 01:10:19.426136 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-08 01:10:19.426142 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:00.069) 0:00:56.667 ********** 2026-03-08 01:10:19.426149 | orchestrator | 2026-03-08 01:10:19.426155 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-08 01:10:19.426174 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:00.063) 0:00:56.731 ********** 2026-03-08 01:10:19.426181 | orchestrator | 2026-03-08 01:10:19.426187 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-08 01:10:19.426193 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:00.290) 0:00:57.022 ********** 2026-03-08 01:10:19.426200 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.426206 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.426212 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:10:19.426218 | orchestrator | 2026-03-08 
01:10:19.426224 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-08 01:10:19.426230 | orchestrator | Sunday 08 March 2026 01:08:47 +0000 (0:00:01.764) 0:00:58.786 ********** 2026-03-08 01:10:19.426236 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.426243 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.426249 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-08 01:10:19.426255 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-08 01:10:19.426262 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-08 01:10:19.426268 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-03-08 01:10:19.426275 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:10:19.426282 | orchestrator | 2026-03-08 01:10:19.426289 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-08 01:10:19.426312 | orchestrator | Sunday 08 March 2026 01:09:37 +0000 (0:00:50.286) 0:01:49.073 ********** 2026-03-08 01:10:19.426319 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.426326 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:10:19.426332 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:10:19.426339 | orchestrator | 2026-03-08 01:10:19.426344 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-08 01:10:19.426351 | orchestrator | Sunday 08 March 2026 01:10:11 +0000 (0:00:33.723) 0:02:22.796 ********** 2026-03-08 01:10:19.426357 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:10:19.426364 | orchestrator | 2026-03-08 01:10:19.426370 | orchestrator | TASK [grafana : Remove old grafana docker volume] 
****************************** 2026-03-08 01:10:19.426384 | orchestrator | Sunday 08 March 2026 01:10:13 +0000 (0:00:02.292) 0:02:25.089 ********** 2026-03-08 01:10:19.426391 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.426397 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:10:19.426403 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:10:19.426409 | orchestrator | 2026-03-08 01:10:19.426415 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-08 01:10:19.426421 | orchestrator | Sunday 08 March 2026 01:10:14 +0000 (0:00:00.573) 0:02:25.662 ********** 2026-03-08 01:10:19.426429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-08 01:10:19.426437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-08 01:10:19.426444 | orchestrator | 2026-03-08 01:10:19.426451 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-08 01:10:19.426457 | orchestrator | Sunday 08 March 2026 01:10:16 +0000 (0:00:02.378) 0:02:28.040 ********** 2026-03-08 01:10:19.426463 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:10:19.426469 | orchestrator | 2026-03-08 01:10:19.426474 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:10:19.426481 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 
skipped=7  rescued=0 ignored=0 2026-03-08 01:10:19.426488 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:10:19.426495 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:10:19.426502 | orchestrator | 2026-03-08 01:10:19.426508 | orchestrator | 2026-03-08 01:10:19.426514 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:10:19.426520 | orchestrator | Sunday 08 March 2026 01:10:16 +0000 (0:00:00.303) 0:02:28.344 ********** 2026-03-08 01:10:19.426526 | orchestrator | =============================================================================== 2026-03-08 01:10:19.426533 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.29s 2026-03-08 01:10:19.426539 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.14s 2026-03-08 01:10:19.426545 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.72s 2026-03-08 01:10:19.426551 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.41s 2026-03-08 01:10:19.426557 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.38s 2026-03-08 01:10:19.426570 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.29s 2026-03-08 01:10:19.426576 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.29s 2026-03-08 01:10:19.426582 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s 2026-03-08 01:10:19.426590 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.47s 2026-03-08 01:10:19.426613 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.43s 
2026-03-08 01:10:19.426619 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.19s 2026-03-08 01:10:19.426625 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.17s 2026-03-08 01:10:19.426631 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.12s 2026-03-08 01:10:19.426637 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.03s 2026-03-08 01:10:19.426642 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s 2026-03-08 01:10:19.426649 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.81s 2026-03-08 01:10:19.426655 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.71s 2026-03-08 01:10:19.426661 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.67s 2026-03-08 01:10:19.426668 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.65s 2026-03-08 01:10:19.426675 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.60s 2026-03-08 01:10:22.460351 | orchestrator | 2026-03-08 01:10:22 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:10:22.463203 | orchestrator | 2026-03-08 01:10:22 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:10:22.463302 | orchestrator | 2026-03-08 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:25.507138 | orchestrator | 2026-03-08 01:10:25 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:10:25.507468 | orchestrator | 2026-03-08 01:10:25 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:10:25.507694 | orchestrator | 2026-03-08 01:10:25 | INFO  | Wait 1 second(s) until the 
next check 2026-03-08 01:10:28.544331 | orchestrator | 2026-03-08 01:10:28 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:10:28.546953 | orchestrator | 2026-03-08 01:10:28 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:10:28.547004 | orchestrator | 2026-03-08 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:31.598107 | orchestrator | 2026-03-08 01:10:31 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:10:31.602164 | orchestrator | 2026-03-08 01:10:31 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:10:31.602247 | orchestrator | 2026-03-08 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:34.646062 | orchestrator | 2026-03-08 01:10:34 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:12:34.757032 | orchestrator | 2026-03-08 01:12:34 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:12:34.757121 | orchestrator | 2026-03-08 01:12:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:12:37.797925 | orchestrator | 2026-03-08 01:12:37 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:12:37.800579 | orchestrator | 2026-03-08 01:12:37 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:12:37.801504 | orchestrator | 2026-03-08 01:12:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:12:40.848920 | orchestrator | 2026-03-08 01:12:40 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:12:40.850830 | orchestrator | 2026-03-08 01:12:40 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:12:40.850884 | orchestrator | 2026-03-08 01:12:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:12:43.902695 | orchestrator | 2026-03-08 01:12:43 | 
INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:33.513904 | orchestrator | 2026-03-08 01:14:33 | INFO 
| Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:14:33.513974 | orchestrator | 2026-03-08 01:14:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:36.551375 | orchestrator | 2026-03-08 01:14:36 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:36.555009 | orchestrator | 2026-03-08 01:14:36 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:14:36.555074 | orchestrator | 2026-03-08 01:14:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:39.595698 | orchestrator | 2026-03-08 01:14:39 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:39.597554 | orchestrator | 2026-03-08 01:14:39 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state STARTED 2026-03-08 01:14:39.597618 | orchestrator | 2026-03-08 01:14:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:42.628789 | orchestrator | 2026-03-08 01:14:42 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:42.631223 | orchestrator | 2026-03-08 01:14:42 | INFO  | Task 3a4cc380-26c6-4cf8-81ff-1aba35d87488 is in state SUCCESS 2026-03-08 01:14:42.633097 | orchestrator | 2026-03-08 01:14:42.633144 | orchestrator | 2026-03-08 01:14:42.633160 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:14:42.633175 | orchestrator | 2026-03-08 01:14:42.633188 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-08 01:14:42.633194 | orchestrator | Sunday 08 March 2026 01:05:52 +0000 (0:00:00.421) 0:00:00.421 ********** 2026-03-08 01:14:42.633201 | orchestrator | changed: [testbed-manager] 2026-03-08 01:14:42.633208 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.633215 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.633221 | orchestrator | changed: [testbed-node-2] 
2026-03-08 01:14:42.633227 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.633234 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.633240 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.633424 | orchestrator | 2026-03-08 01:14:42.633429 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:14:42.633433 | orchestrator | Sunday 08 March 2026 01:05:52 +0000 (0:00:00.835) 0:00:01.257 ********** 2026-03-08 01:14:42.633437 | orchestrator | changed: [testbed-manager] 2026-03-08 01:14:42.633441 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.633445 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.633448 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.633452 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.633456 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.633460 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.633463 | orchestrator | 2026-03-08 01:14:42.633467 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:14:42.633478 | orchestrator | Sunday 08 March 2026 01:05:53 +0000 (0:00:00.937) 0:00:02.195 ********** 2026-03-08 01:14:42.633508 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-08 01:14:42.633528 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-08 01:14:42.633534 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-08 01:14:42.633568 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-08 01:14:42.633575 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-08 01:14:42.633618 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-08 01:14:42.633656 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-08 01:14:42.633664 | orchestrator | 
2026-03-08 01:14:42.633670 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-08 01:14:42.633676 | orchestrator |
2026-03-08 01:14:42.633682 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-08 01:14:42.633686 | orchestrator | Sunday 08 March 2026 01:05:55 +0000 (0:00:01.590) 0:00:03.785 **********
2026-03-08 01:14:42.633690 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:42.633693 | orchestrator |
2026-03-08 01:14:42.633698 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-08 01:14:42.633702 | orchestrator | Sunday 08 March 2026 01:05:56 +0000 (0:00:01.164) 0:00:04.949 **********
2026-03-08 01:14:42.633720 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-08 01:14:42.633725 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-08 01:14:42.633730 | orchestrator |
2026-03-08 01:14:42.633734 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-08 01:14:42.633739 | orchestrator | Sunday 08 March 2026 01:06:00 +0000 (0:00:04.005) 0:00:08.954 **********
2026-03-08 01:14:42.633743 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 01:14:42.633748 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 01:14:42.633752 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.633757 | orchestrator |
2026-03-08 01:14:42.633761 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-08 01:14:42.633766 | orchestrator | Sunday 08 March 2026 01:06:04 +0000 (0:00:04.170) 0:00:13.125 **********
2026-03-08 01:14:42.633770 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.633775 | orchestrator |
2026-03-08 01:14:42.633794 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-08 01:14:42.633799 | orchestrator | Sunday 08 March 2026 01:06:06 +0000 (0:00:01.324) 0:00:14.450 **********
2026-03-08 01:14:42.633803 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.633966 | orchestrator |
2026-03-08 01:14:42.633972 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-08 01:14:42.633976 | orchestrator | Sunday 08 March 2026 01:06:07 +0000 (0:00:01.755) 0:00:16.205 **********
2026-03-08 01:14:42.633981 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.633986 | orchestrator |
2026-03-08 01:14:42.633990 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:42.633995 | orchestrator | Sunday 08 March 2026 01:06:11 +0000 (0:00:04.095) 0:00:20.301 **********
2026-03-08 01:14:42.633999 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634041 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634048 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634051 | orchestrator |
2026-03-08 01:14:42.634055 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-08 01:14:42.634059 | orchestrator | Sunday 08 March 2026 01:06:12 +0000 (0:00:00.319) 0:00:20.620 **********
2026-03-08 01:14:42.634063 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634067 | orchestrator |
2026-03-08 01:14:42.634071 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-08 01:14:42.634074 | orchestrator | Sunday 08 March 2026 01:06:44 +0000 (0:00:32.245) 0:00:52.867 **********
2026-03-08 01:14:42.634078 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634082 | orchestrator |
2026-03-08 01:14:42.634085 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-08 01:14:42.634089 | orchestrator | Sunday 08 March 2026 01:07:00 +0000 (0:00:15.789) 0:01:08.657 **********
2026-03-08 01:14:42.634093 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634097 | orchestrator |
2026-03-08 01:14:42.634100 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-08 01:14:42.634104 | orchestrator | Sunday 08 March 2026 01:07:12 +0000 (0:00:12.047) 0:01:20.704 **********
2026-03-08 01:14:42.634119 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634123 | orchestrator |
2026-03-08 01:14:42.634127 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-08 01:14:42.634130 | orchestrator | Sunday 08 March 2026 01:07:13 +0000 (0:00:00.947) 0:01:21.651 **********
2026-03-08 01:14:42.634134 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634138 | orchestrator |
2026-03-08 01:14:42.634141 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:42.634145 | orchestrator | Sunday 08 March 2026 01:07:13 +0000 (0:00:00.464) 0:01:22.116 **********
2026-03-08 01:14:42.634149 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:42.634153 | orchestrator |
2026-03-08 01:14:42.634161 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-08 01:14:42.634165 | orchestrator | Sunday 08 March 2026 01:07:14 +0000 (0:00:00.691) 0:01:22.808 **********
2026-03-08 01:14:42.634169 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634187 | orchestrator |
2026-03-08 01:14:42.634192 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-08 01:14:42.634196 | orchestrator | Sunday 08 March 2026 01:07:32 +0000 (0:00:18.052) 0:01:40.861 **********
2026-03-08 01:14:42.634200 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634203 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634207 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634211 | orchestrator |
2026-03-08 01:14:42.634215 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-08 01:14:42.634218 | orchestrator |
2026-03-08 01:14:42.634222 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-08 01:14:42.634231 | orchestrator | Sunday 08 March 2026 01:07:32 +0000 (0:00:00.331) 0:01:41.192 **********
2026-03-08 01:14:42.634235 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:42.634238 | orchestrator |
2026-03-08 01:14:42.634242 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-08 01:14:42.634246 | orchestrator | Sunday 08 March 2026 01:07:33 +0000 (0:00:00.643) 0:01:41.836 **********
2026-03-08 01:14:42.634250 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634253 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634257 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634261 | orchestrator |
2026-03-08 01:14:42.634264 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-08 01:14:42.634268 | orchestrator | Sunday 08 March 2026 01:07:35 +0000 (0:00:02.003) 0:01:43.839 **********
2026-03-08 01:14:42.634313 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634317 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634320 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634324 | orchestrator |
2026-03-08 01:14:42.634328 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-08 01:14:42.634332 | orchestrator | Sunday 08 March 2026 01:07:37 +0000 (0:00:02.127) 0:01:45.967 **********
2026-03-08 01:14:42.634336 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634339 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634343 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634347 | orchestrator |
2026-03-08 01:14:42.634351 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-08 01:14:42.634355 | orchestrator | Sunday 08 March 2026 01:07:37 +0000 (0:00:00.347) 0:01:46.315 **********
2026-03-08 01:14:42.634358 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 01:14:42.634362 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634366 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 01:14:42.634370 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634374 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-08 01:14:42.634377 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-08 01:14:42.634381 | orchestrator |
2026-03-08 01:14:42.634385 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-08 01:14:42.634389 | orchestrator | Sunday 08 March 2026 01:07:47 +0000 (0:00:09.493) 0:01:55.808 **********
2026-03-08 01:14:42.634392 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634396 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634400 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634404 | orchestrator |
2026-03-08 01:14:42.634407 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-08 01:14:42.634411 | orchestrator | Sunday 08 March 2026 01:07:47 +0000 (0:00:00.352) 0:01:56.160 **********
2026-03-08 01:14:42.634415 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-08 01:14:42.634422 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634426 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 01:14:42.634430 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634433 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 01:14:42.634437 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634441 | orchestrator |
2026-03-08 01:14:42.634445 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-08 01:14:42.634449 | orchestrator | Sunday 08 March 2026 01:07:48 +0000 (0:00:00.667) 0:01:56.828 **********
2026-03-08 01:14:42.634452 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634456 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634460 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634464 | orchestrator |
2026-03-08 01:14:42.634467 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-08 01:14:42.634471 | orchestrator | Sunday 08 March 2026 01:07:49 +0000 (0:00:00.698) 0:01:57.527 **********
2026-03-08 01:14:42.634475 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634479 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634482 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634486 | orchestrator |
2026-03-08 01:14:42.634490 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-08 01:14:42.634494 | orchestrator | Sunday 08 March 2026 01:07:50 +0000 (0:00:01.021) 0:01:58.548 **********
2026-03-08 01:14:42.634497 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634501 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634508 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634512 | orchestrator |
2026-03-08 01:14:42.634516 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-08 01:14:42.634520 | orchestrator | Sunday 08 March 2026 01:07:52 +0000 (0:00:02.419) 0:02:00.968 **********
2026-03-08 01:14:42.634523 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634527 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634531 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634534 | orchestrator |
2026-03-08 01:14:42.634552 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-08 01:14:42.634559 | orchestrator | Sunday 08 March 2026 01:08:14 +0000 (0:00:21.609) 0:02:22.578 **********
2026-03-08 01:14:42.634565 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634571 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634577 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634583 | orchestrator |
2026-03-08 01:14:42.634590 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-08 01:14:42.634596 | orchestrator | Sunday 08 March 2026 01:08:28 +0000 (0:00:14.450) 0:02:37.028 **********
2026-03-08 01:14:42.634603 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:42.634609 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634615 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634619 | orchestrator |
2026-03-08 01:14:42.634623 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-08 01:14:42.634626 | orchestrator | Sunday 08 March 2026 01:08:29 +0000 (0:00:01.087) 0:02:38.115 **********
2026-03-08 01:14:42.634630 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634634 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634637 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:42.634641 | orchestrator |
2026-03-08 01:14:42.634645 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-08 01:14:42.634649 | orchestrator | Sunday 08 March 2026 01:08:42 +0000 (0:00:12.483) 0:02:50.599 **********
2026-03-08 01:14:42.634652 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634656 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634660 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634666 | orchestrator |
2026-03-08 01:14:42.634682 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-08 01:14:42.634694 | orchestrator | Sunday 08 March 2026 01:08:43 +0000 (0:00:01.092) 0:02:51.692 **********
2026-03-08 01:14:42.634700 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.634706 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.634713 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.634718 | orchestrator |
2026-03-08 01:14:42.634725 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-08 01:14:42.634731 | orchestrator |
2026-03-08 01:14:42.634736 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:42.634743 | orchestrator | Sunday 08 March 2026 01:08:43 +0000 (0:00:00.536) 0:02:52.228 **********
2026-03-08 01:14:42.634749 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:42.634756 | orchestrator |
2026-03-08 01:14:42.634762 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-08 01:14:42.634769 | orchestrator | Sunday 08 March 2026 01:08:44 +0000 (0:00:00.583) 0:02:52.812 **********
2026-03-08 01:14:42.634775 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-08 01:14:42.634782 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-08 01:14:42.634788 | orchestrator |
2026-03-08 01:14:42.634794 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-08 01:14:42.634800 | orchestrator | Sunday 08 March 2026 01:08:47 +0000 (0:00:03.415) 0:02:56.228 **********
2026-03-08 01:14:42.634804 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-08 01:14:42.634809 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-08 01:14:42.634813 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-08 01:14:42.634817 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-08 01:14:42.634821 | orchestrator |
2026-03-08 01:14:42.634824 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-08 01:14:42.634828 | orchestrator | Sunday 08 March 2026 01:08:54 +0000 (0:00:06.427) 0:03:02.655 **********
2026-03-08 01:14:42.634832 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:14:42.634836 | orchestrator |
2026-03-08 01:14:42.634859 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-08 01:14:42.634867 | orchestrator | Sunday 08 March 2026 01:08:57 +0000 (0:00:02.873) 0:03:05.529 **********
2026-03-08 01:14:42.634873 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:14:42.634879 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-08 01:14:42.634885 | orchestrator |
2026-03-08 01:14:42.634892 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-08 01:14:42.634898 | orchestrator | Sunday 08 March 2026 01:09:01 +0000 (0:00:04.020) 0:03:09.549 **********
2026-03-08 01:14:42.634934 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:14:42.634941 | orchestrator |
2026-03-08 01:14:42.634947 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-08 01:14:42.634981 | orchestrator | Sunday 08 March 2026 01:09:04 +0000 (0:00:03.219) 0:03:12.768 **********
2026-03-08 01:14:42.634984 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-08 01:14:42.634988 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-08 01:14:42.634992 | orchestrator |
2026-03-08 01:14:42.634996 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-08 01:14:42.635005 | orchestrator | Sunday 08 March 2026 01:09:11 +0000 (0:00:07.334) 0:03:20.103 **********
2026-03-08 01:14:42.635039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635137 | orchestrator |
2026-03-08 01:14:42.635144 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-08 01:14:42.635151 | orchestrator | Sunday 08 March 2026 01:09:13 +0000 (0:00:01.393) 0:03:21.497 **********
2026-03-08 01:14:42.635158 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.635165 | orchestrator |
2026-03-08 01:14:42.635172 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-08 01:14:42.635179 | orchestrator | Sunday 08 March 2026 01:09:13 +0000 (0:00:00.142) 0:03:21.639 **********
2026-03-08 01:14:42.635186 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.635193 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.635200 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.635207 | orchestrator |
2026-03-08 01:14:42.635214 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-08 01:14:42.635226 | orchestrator | Sunday 08 March 2026 01:09:13 +0000 (0:00:00.556) 0:03:22.195 **********
2026-03-08 01:14:42.635232 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 01:14:42.635239 | orchestrator |
2026-03-08 01:14:42.635245 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-08 01:14:42.635252 | orchestrator | Sunday 08 March 2026 01:09:14 +0000 (0:00:00.749) 0:03:22.945 **********
2026-03-08 01:14:42.635265 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.635272 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.635330 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.635338 | orchestrator |
2026-03-08 01:14:42.635344 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:42.635351 | orchestrator | Sunday 08 March 2026 01:09:14 +0000 (0:00:00.390) 0:03:23.335 **********
2026-03-08 01:14:42.635358 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:42.635365 | orchestrator |
2026-03-08 01:14:42.635372 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-08 01:14:42.635378 | orchestrator | Sunday 08 March 2026 01:09:15 +0000 (0:00:00.596) 0:03:23.931 **********
2026-03-08 01:14:42.635477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635581 | orchestrator |
2026-03-08 01:14:42.635746 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-08 01:14:42.635756 | orchestrator | Sunday 08 March 2026 01:09:18 +0000 (0:00:02.673) 0:03:26.605 **********
2026-03-08 01:14:42.635767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635783 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.635789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:42.635807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.635814 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.635823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port':
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.635830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.635837 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.635843 | orchestrator | 2026-03-08 01:14:42.635850 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-08 01:14:42.635856 | orchestrator | Sunday 08 March 2026 01:09:18 +0000 (0:00:00.652) 0:03:27.257 ********** 2026-03-08 01:14:42.635863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.635873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.635880 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:42.636006 | orchestrator | skipping: [testbed-node-1] => (item={'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.636025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.636032 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.636039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.636050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.636101 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.636108 | orchestrator | 2026-03-08 01:14:42.636115 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-08 01:14:42.636211 | orchestrator | Sunday 08 March 2026 01:09:19 +0000 (0:00:00.798) 0:03:28.056 ********** 2026-03-08 01:14:42.636225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636281 | orchestrator | 2026-03-08 01:14:42.636287 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-08 01:14:42.636294 | orchestrator | Sunday 08 March 2026 01:09:22 +0000 (0:00:02.522) 0:03:30.579 ********** 2026-03-08 01:14:42.636301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636358 | orchestrator | 2026-03-08 01:14:42.636364 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-08 01:14:42.636371 | orchestrator | Sunday 08 March 2026 01:09:27 +0000 (0:00:05.769) 0:03:36.349 ********** 2026-03-08 01:14:42.636382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.636388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.636395 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.636404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.636415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.636421 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.636428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:42.636438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.636445 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.636451 | orchestrator | 2026-03-08 01:14:42.636457 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-08 01:14:42.636464 | orchestrator | Sunday 08 March 2026 01:09:28 +0000 (0:00:00.599) 0:03:36.948 ********** 
2026-03-08 01:14:42.636470 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.636476 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.636483 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.636488 | orchestrator | 2026-03-08 01:14:42.636494 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-08 01:14:42.636502 | orchestrator | Sunday 08 March 2026 01:09:30 +0000 (0:00:01.510) 0:03:38.458 ********** 2026-03-08 01:14:42.636508 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.636514 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.636521 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.636527 | orchestrator | 2026-03-08 01:14:42.636657 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-08 01:14:42.636684 | orchestrator | Sunday 08 March 2026 01:09:30 +0000 (0:00:00.347) 0:03:38.806 ********** 2026-03-08 01:14:42.636699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:42.636733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.636757 | orchestrator | 2026-03-08 01:14:42.636761 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-08 01:14:42.636765 | orchestrator | Sunday 08 March 2026 01:09:32 +0000 (0:00:02.049) 0:03:40.855 ********** 2026-03-08 01:14:42.636769 | orchestrator | 2026-03-08 01:14:42.636773 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-08 01:14:42.636776 | orchestrator | Sunday 08 March 2026 01:09:32 +0000 (0:00:00.148) 0:03:41.003 ********** 2026-03-08 01:14:42.636780 | orchestrator | 2026-03-08 01:14:42.636784 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-08 01:14:42.636788 | orchestrator | Sunday 08 March 2026 01:09:32 +0000 (0:00:00.130) 0:03:41.134 ********** 2026-03-08 01:14:42.636791 | orchestrator | 2026-03-08 01:14:42.636795 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-08 01:14:42.636799 | orchestrator | Sunday 08 March 2026 01:09:32 +0000 (0:00:00.132) 0:03:41.266 ********** 2026-03-08 01:14:42.636803 | orchestrator | changed: [testbed-node-0] 2026-03-08 
01:14:42.636807 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.636811 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.636815 | orchestrator | 2026-03-08 01:14:42.636818 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-08 01:14:42.636822 | orchestrator | Sunday 08 March 2026 01:09:56 +0000 (0:00:23.289) 0:04:04.555 ********** 2026-03-08 01:14:42.636826 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.636830 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.636833 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.636837 | orchestrator | 2026-03-08 01:14:42.636841 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-08 01:14:42.636845 | orchestrator | 2026-03-08 01:14:42.636848 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:42.636852 | orchestrator | Sunday 08 March 2026 01:10:01 +0000 (0:00:05.806) 0:04:10.361 ********** 2026-03-08 01:14:42.636859 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:14:42.636864 | orchestrator | 2026-03-08 01:14:42.636867 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:42.636871 | orchestrator | Sunday 08 March 2026 01:10:03 +0000 (0:00:01.180) 0:04:11.542 ********** 2026-03-08 01:14:42.636886 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.636891 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.636894 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.636898 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.636902 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.636906 | orchestrator | skipping: [testbed-node-2] 2026-03-08 
01:14:42.636909 | orchestrator | 2026-03-08 01:14:42.636913 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-08 01:14:42.636917 | orchestrator | Sunday 08 March 2026 01:10:03 +0000 (0:00:00.629) 0:04:12.171 ********** 2026-03-08 01:14:42.636921 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.636924 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.636928 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.636932 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:14:42.636936 | orchestrator | 2026-03-08 01:14:42.636940 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-08 01:14:42.636944 | orchestrator | Sunday 08 March 2026 01:10:04 +0000 (0:00:01.071) 0:04:13.243 ********** 2026-03-08 01:14:42.636948 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-08 01:14:42.636952 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-08 01:14:42.636956 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-08 01:14:42.636960 | orchestrator | 2026-03-08 01:14:42.636964 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-08 01:14:42.636969 | orchestrator | Sunday 08 March 2026 01:10:05 +0000 (0:00:00.692) 0:04:13.935 ********** 2026-03-08 01:14:42.636973 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-08 01:14:42.636977 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-08 01:14:42.636981 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-08 01:14:42.636985 | orchestrator | 2026-03-08 01:14:42.636989 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-08 01:14:42.636993 | orchestrator | Sunday 08 March 2026 01:10:06 +0000 (0:00:01.351) 0:04:15.287 
********** 2026-03-08 01:14:42.636997 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-08 01:14:42.637001 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.637004 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-08 01:14:42.637008 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.637012 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-08 01:14:42.637016 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.637019 | orchestrator | 2026-03-08 01:14:42.637024 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-08 01:14:42.637030 | orchestrator | Sunday 08 March 2026 01:10:07 +0000 (0:00:00.568) 0:04:15.856 ********** 2026-03-08 01:14:42.637036 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 01:14:42.637046 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 01:14:42.637053 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.637059 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 01:14:42.637065 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 01:14:42.637071 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.637078 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-08 01:14:42.637084 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-08 01:14:42.637090 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 01:14:42.637095 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 01:14:42.637101 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.637112 | orchestrator | changed: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-08 01:14:42.637118 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-08 01:14:42.637125 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-08 01:14:42.637131 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-08 01:14:42.637137 | orchestrator | 2026-03-08 01:14:42.637144 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-08 01:14:42.637150 | orchestrator | Sunday 08 March 2026 01:10:08 +0000 (0:00:01.247) 0:04:17.103 ********** 2026-03-08 01:14:42.637156 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.637163 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.637169 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.637175 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.637181 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.637188 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.637196 | orchestrator | 2026-03-08 01:14:42.637203 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-08 01:14:42.637208 | orchestrator | Sunday 08 March 2026 01:10:09 +0000 (0:00:01.205) 0:04:18.308 ********** 2026-03-08 01:14:42.637216 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.637224 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.637232 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.637239 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.637245 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.637252 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.637259 | orchestrator | 2026-03-08 01:14:42.637265 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 
2026-03-08 01:14:42.637276 | orchestrator | Sunday 08 March 2026 01:10:11 +0000 (0:00:01.955) 0:04:20.264 ********** 2026-03-08 01:14:42.637595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637626 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637707 | orchestrator | 2026-03-08 01:14:42.637711 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:42.637715 | orchestrator | Sunday 08 March 2026 01:10:14 +0000 (0:00:02.161) 0:04:22.425 ********** 2026-03-08 01:14:42.637719 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:14:42.637724 | orchestrator | 2026-03-08 01:14:42.637727 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-08 01:14:42.637731 | orchestrator | Sunday 08 March 2026 01:10:15 +0000 (0:00:01.430) 0:04:23.856 ********** 2026-03-08 01:14:42.637735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637766 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.637798 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637826 | orchestrator |
2026-03-08 01:14:42.637830 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-08 01:14:42.637837 | orchestrator | Sunday 08 March 2026 01:10:19 +0000 (0:00:03.554) 0:04:27.410 **********
2026-03-08 01:14:42.637843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.637847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:42.637851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.637884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:42.637892 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.637907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637914 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:42.637921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.637945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637950 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.637953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.637961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:42.637965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637972 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:42.637979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.637983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.637989 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.637995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.638229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638242 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.638246 | orchestrator |
2026-03-08 01:14:42.638250 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-08 01:14:42.638254 | orchestrator | Sunday 08 March 2026 01:10:20 +0000 (0:00:01.630) 0:04:29.041 **********
2026-03-08 01:14:42.638263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.638272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:42.638279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.638283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:42.638291 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:42.638298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638306 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.638310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.638316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:42.638320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638324 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:42.638328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.638332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638336 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.638343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.638349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638353 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.638359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.638363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:42.638367 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.638371 | orchestrator |
2026-03-08 01:14:42.638375 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-08 01:14:42.638379 | orchestrator | Sunday 08 March 2026 01:10:22 +0000 (0:00:02.233) 0:04:31.275 **********
2026-03-08 01:14:42.638383 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.638387 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.638390 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.638394 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:14:42.638398 | orchestrator |
2026-03-08 01:14:42.638402 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-08 01:14:42.638406 | orchestrator | Sunday 08 March 2026 01:10:23 +0000 (0:00:01.062) 0:04:32.337 **********
2026-03-08 01:14:42.638409 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:42.638413 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 01:14:42.638417 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 01:14:42.638421 | orchestrator |
2026-03-08 01:14:42.638424 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-08 01:14:42.638428 | orchestrator | Sunday 08 March 2026 01:10:24 +0000 (0:00:01.013) 0:04:33.351 **********
2026-03-08 01:14:42.638432 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:42.638436 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 01:14:42.638439 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 01:14:42.638445 | orchestrator |
2026-03-08 01:14:42.638450 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-08 01:14:42.638456 | orchestrator | Sunday 08 March 2026 01:10:25 +0000 (0:00:00.927) 0:04:34.278 **********
2026-03-08 01:14:42.638462 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:14:42.638469 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:14:42.638475 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:14:42.638481 | orchestrator |
2026-03-08 01:14:42.638485 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-08 01:14:42.638489 | orchestrator | Sunday 08 March 2026 01:10:26 +0000 (0:00:00.510) 0:04:34.789 **********
2026-03-08 01:14:42.638493 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:14:42.638496 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:14:42.638500 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:14:42.638504 | orchestrator |
2026-03-08 01:14:42.638507 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-08 01:14:42.638511 | orchestrator | Sunday 08 March 2026 01:10:27 +0000 (0:00:00.760) 0:04:35.550 **********
2026-03-08 01:14:42.638515 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:42.638519 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:42.638522 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:42.638526 | orchestrator |
2026-03-08 01:14:42.638530 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-08 01:14:42.638536 | orchestrator | Sunday 08 March 2026 01:10:28 +0000 (0:00:01.228) 0:04:36.778 **********
2026-03-08 01:14:42.638552 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:42.638556 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:42.638560 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:42.638565 | orchestrator |
2026-03-08 01:14:42.638572 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-08 01:14:42.638579 | orchestrator | Sunday 08 March 2026 01:10:29 +0000 (0:00:01.118) 0:04:37.897 **********
2026-03-08 01:14:42.638587 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:42.638594 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:42.638601 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:42.638609 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-08 01:14:42.638615 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-08 01:14:42.638621 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-08 01:14:42.638627 | orchestrator |
2026-03-08 01:14:42.638634 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-08 01:14:42.638641 | orchestrator | Sunday 08 March 2026 01:10:33 +0000 (0:00:03.822) 0:04:41.719 **********
2026-03-08 01:14:42.638648 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.638655 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:42.638663 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:42.638671 | orchestrator |
2026-03-08 01:14:42.638677 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-08 01:14:42.638689 | orchestrator | Sunday 08 March 2026 01:10:33 +0000 (0:00:00.531) 0:04:42.250 **********
2026-03-08 01:14:42.638698 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.638706 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:42.638714 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:42.638720 | orchestrator |
2026-03-08 01:14:42.638727 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-08 01:14:42.638734 | orchestrator | Sunday 08 March 2026 01:10:34 +0000 (0:00:00.315) 0:04:42.566 **********
2026-03-08 01:14:42.638741 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:42.638748 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:42.638755 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:42.638762 | orchestrator |
2026-03-08 01:14:42.638768 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-08 01:14:42.638780 | orchestrator | Sunday 08 March 2026 01:10:35 +0000 (0:00:01.212) 0:04:43.779 **********
2026-03-08 01:14:42.638787 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-08 01:14:42.638794 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-08 01:14:42.638801 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-08 01:14:42.638808 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-08 01:14:42.638816 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-08 01:14:42.638823 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-08 01:14:42.638829 | orchestrator |
2026-03-08 01:14:42.638835 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-08 01:14:42.638840 | orchestrator | Sunday 08 March 2026 01:10:38 +0000 (0:00:03.239) 0:04:47.019 **********
2026-03-08 01:14:42.638846 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 01:14:42.638854 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 01:14:42.638863 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 01:14:42.638870 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 01:14:42.638878 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:42.638885 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 01:14:42.638892 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:42.638899 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 01:14:42.638905 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:42.638911 | orchestrator |
2026-03-08 01:14:42.638918 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-08 01:14:42.638925 | orchestrator | Sunday 08 March 2026 01:10:42 +0000 (0:00:03.408) 0:04:50.427 **********
2026-03-08 01:14:42.638933 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.638939 | orchestrator |
2026-03-08 01:14:42.638947 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-08 01:14:42.638953 | orchestrator | Sunday 08 March 2026 01:10:42 +0000 (0:00:00.148) 0:04:50.575 **********
2026-03-08 01:14:42.638959 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.638965 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:42.638971 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:42.638977 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.638984 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.638990 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.638997 | orchestrator |
2026-03-08 01:14:42.639005 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-08 01:14:42.639013 | orchestrator | Sunday 08 March 2026 01:10:42 +0000 (0:00:00.685) 0:04:51.161 **********
2026-03-08 01:14:42.639019 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:42.639025 | orchestrator |
2026-03-08 01:14:42.639033 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-08 01:14:42.639045 | orchestrator | Sunday 08 March 2026 01:10:43 +0000 (0:00:00.852) 0:04:51.846 **********
2026-03-08 01:14:42.639053 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:42.639060 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:42.639068 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:42.639075 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:42.639082 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:42.639090 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:42.639104 | orchestrator |
2026-03-08 01:14:42.639112 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-08 01:14:42.639119 | orchestrator | Sunday 08 March 2026 01:10:44 +0000 (0:00:00.852) 0:04:52.699 **********
2026-03-08 01:14:42.639130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.639138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.639144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.639151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:42.639162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:42.639173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 
01:14:42.639205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639262 | orchestrator | 2026-03-08 01:14:42.639268 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-08 01:14:42.639275 | orchestrator | Sunday 08 March 2026 01:10:48 +0000 (0:00:04.200) 0:04:56.899 ********** 2026-03-08 01:14:42.639282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:42.639298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:42.639309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:42.639317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:42.639325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:42.639331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:42.639339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.639428 | orchestrator | 2026-03-08 01:14:42.639435 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-08 01:14:42.639446 | orchestrator | Sunday 08 March 2026 01:10:54 +0000 (0:00:06.307) 
0:05:03.207 ********** 2026-03-08 01:14:42.639452 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.639458 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.639464 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.639471 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.639477 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.639483 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.639490 | orchestrator | 2026-03-08 01:14:42.639497 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-08 01:14:42.639504 | orchestrator | Sunday 08 March 2026 01:10:56 +0000 (0:00:01.499) 0:05:04.706 ********** 2026-03-08 01:14:42.639509 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-08 01:14:42.639513 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-08 01:14:42.639517 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-08 01:14:42.639520 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-08 01:14:42.639524 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-08 01:14:42.639528 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-08 01:14:42.639532 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.639536 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-08 01:14:42.639558 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.639562 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-08 01:14:42.639568 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-08 01:14:42.639574 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.639581 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-08 01:14:42.639592 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-08 01:14:42.639596 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-08 01:14:42.639600 | orchestrator | 2026-03-08 01:14:42.639604 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-08 01:14:42.639607 | orchestrator | Sunday 08 March 2026 01:11:00 +0000 (0:00:03.701) 0:05:08.408 ********** 2026-03-08 01:14:42.639612 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.639618 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.639622 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.639626 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.639629 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.639633 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.639637 | orchestrator | 2026-03-08 01:14:42.639641 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-08 01:14:42.639644 | orchestrator | Sunday 08 March 2026 01:11:00 +0000 (0:00:00.631) 0:05:09.040 ********** 2026-03-08 01:14:42.639648 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-08 01:14:42.639652 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-08 01:14:42.639656 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-08 
01:14:42.639660 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-08 01:14:42.639664 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-08 01:14:42.639671 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:42.639675 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-08 01:14:42.639679 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:42.639682 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:42.639686 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:42.639690 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.639694 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:42.639697 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.639701 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:42.639705 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.639709 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:42.639715 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:42.639719 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:42.639723 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:42.639727 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:42.639731 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:42.639739 | orchestrator | 2026-03-08 01:14:42.639743 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-08 01:14:42.639747 | orchestrator | Sunday 08 March 2026 01:11:05 +0000 (0:00:05.221) 0:05:14.262 ********** 2026-03-08 01:14:42.639751 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 01:14:42.639755 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 01:14:42.639758 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 01:14:42.639762 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 01:14:42.639766 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-08 01:14:42.639770 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 01:14:42.639774 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-08 01:14:42.639778 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 01:14:42.639781 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-08 01:14:42.639785 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 01:14:42.639791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 01:14:42.639797 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 01:14:42.639804 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-08 01:14:42.639810 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.639817 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-08 01:14:42.639823 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.639830 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-08 01:14:42.639836 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.639842 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 01:14:42.639848 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 01:14:42.639854 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 01:14:42.639861 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 01:14:42.639867 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 01:14:42.639873 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 01:14:42.639879 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 01:14:42.639885 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 01:14:42.639891 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 
'ssh_config'}) 2026-03-08 01:14:42.639897 | orchestrator | 2026-03-08 01:14:42.639908 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-08 01:14:42.639914 | orchestrator | Sunday 08 March 2026 01:11:13 +0000 (0:00:07.271) 0:05:21.533 ********** 2026-03-08 01:14:42.639921 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.639927 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.639933 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.639939 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.639945 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.639951 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.639957 | orchestrator | 2026-03-08 01:14:42.639964 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-08 01:14:42.639975 | orchestrator | Sunday 08 March 2026 01:11:13 +0000 (0:00:00.799) 0:05:22.333 ********** 2026-03-08 01:14:42.639982 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.639989 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.639996 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.640002 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.640009 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.640014 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.640020 | orchestrator | 2026-03-08 01:14:42.640027 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-08 01:14:42.640033 | orchestrator | Sunday 08 March 2026 01:11:14 +0000 (0:00:00.643) 0:05:22.976 ********** 2026-03-08 01:14:42.640039 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.640045 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.640052 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.640059 | orchestrator | changed: 
[testbed-node-3] 2026-03-08 01:14:42.640069 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.640074 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.640078 | orchestrator | 2026-03-08 01:14:42.640082 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-08 01:14:42.640086 | orchestrator | Sunday 08 March 2026 01:11:17 +0000 (0:00:02.576) 0:05:25.552 ********** 2026-03-08 01:14:42.640090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:42.640095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 
01:14:42.640099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.640106 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.640120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:42.640134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:42.640144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.640151 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.640157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:42.640163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:42.640174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.640186 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.640193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:42.640203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:42.640210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.640215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.640221 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.640225 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.640229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:42.640233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:42.640241 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.640244 | orchestrator | 2026-03-08 01:14:42.640248 | 
orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-08 01:14:42.640252 | orchestrator | Sunday 08 March 2026 01:11:18 +0000 (0:00:01.445) 0:05:26.998 ********** 2026-03-08 01:14:42.640256 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-08 01:14:42.640262 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-08 01:14:42.640266 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.640270 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-08 01:14:42.640273 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-08 01:14:42.640277 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.640283 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-08 01:14:42.640289 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-08 01:14:42.640296 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.640302 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-08 01:14:42.640308 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-08 01:14:42.640314 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.640321 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-08 01:14:42.640328 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-08 01:14:42.640334 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.640340 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-08 01:14:42.640346 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-08 01:14:42.640352 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.640359 | orchestrator | 2026-03-08 01:14:42.640366 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 
2026-03-08 01:14:42.640371 | orchestrator | Sunday 08 March 2026 01:11:19 +0000 (0:00:00.869) 0:05:27.867 ********** 2026-03-08 01:14:42.640383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640408 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:42.640592 | orchestrator | 2026-03-08 01:14:42.640598 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:42.640604 | orchestrator | Sunday 08 March 2026 01:11:22 +0000 (0:00:02.752) 0:05:30.620 ********** 2026-03-08 01:14:42.640610 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.640617 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.640623 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.640627 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.640630 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.640634 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.640638 | orchestrator | 2026-03-08 01:14:42.640641 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-08 
01:14:42.640645 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:00.806) 0:05:31.427 ********** 2026-03-08 01:14:42.640649 | orchestrator | 2026-03-08 01:14:42.640653 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-08 01:14:42.640656 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:00.141) 0:05:31.569 ********** 2026-03-08 01:14:42.640660 | orchestrator | 2026-03-08 01:14:42.640667 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-08 01:14:42.640671 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:00.129) 0:05:31.698 ********** 2026-03-08 01:14:42.640674 | orchestrator | 2026-03-08 01:14:42.640678 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-08 01:14:42.640682 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:00.133) 0:05:31.832 ********** 2026-03-08 01:14:42.640686 | orchestrator | 2026-03-08 01:14:42.640689 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-08 01:14:42.640693 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:00.129) 0:05:31.962 ********** 2026-03-08 01:14:42.640699 | orchestrator | 2026-03-08 01:14:42.640705 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-08 01:14:42.640712 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:00.129) 0:05:32.091 ********** 2026-03-08 01:14:42.640716 | orchestrator | 2026-03-08 01:14:42.640720 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-08 01:14:42.640724 | orchestrator | Sunday 08 March 2026 01:11:24 +0000 (0:00:00.348) 0:05:32.439 ********** 2026-03-08 01:14:42.640727 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.640731 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.640735 | 
orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.640739 | orchestrator | 2026-03-08 01:14:42.640743 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-08 01:14:42.640746 | orchestrator | Sunday 08 March 2026 01:11:36 +0000 (0:00:12.012) 0:05:44.451 ********** 2026-03-08 01:14:42.640750 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.640757 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.640767 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.640777 | orchestrator | 2026-03-08 01:14:42.640784 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-08 01:14:42.640790 | orchestrator | Sunday 08 March 2026 01:11:51 +0000 (0:00:15.156) 0:05:59.608 ********** 2026-03-08 01:14:42.640796 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.640801 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.640808 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.640813 | orchestrator | 2026-03-08 01:14:42.640819 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-08 01:14:42.640825 | orchestrator | Sunday 08 March 2026 01:12:16 +0000 (0:00:25.458) 0:06:25.067 ********** 2026-03-08 01:14:42.640831 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.640838 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.640844 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.640850 | orchestrator | 2026-03-08 01:14:42.640857 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-08 01:14:42.640862 | orchestrator | Sunday 08 March 2026 01:12:53 +0000 (0:00:36.703) 0:07:01.770 ********** 2026-03-08 01:14:42.640866 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 
2026-03-08 01:14:42.640871 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-08 01:14:42.640879 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-08 01:14:42.640888 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.640894 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.640900 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.640905 | orchestrator | 2026-03-08 01:14:42.640912 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-08 01:14:42.640918 | orchestrator | Sunday 08 March 2026 01:12:59 +0000 (0:00:06.345) 0:07:08.116 ********** 2026-03-08 01:14:42.640924 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.640929 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.640936 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.640942 | orchestrator | 2026-03-08 01:14:42.640948 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-08 01:14:42.640954 | orchestrator | Sunday 08 March 2026 01:13:00 +0000 (0:00:00.776) 0:07:08.893 ********** 2026-03-08 01:14:42.640961 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:42.640967 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:42.640974 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:42.640984 | orchestrator | 2026-03-08 01:14:42.640991 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-08 01:14:42.640997 | orchestrator | Sunday 08 March 2026 01:13:31 +0000 (0:00:30.528) 0:07:39.422 ********** 2026-03-08 01:14:42.641004 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.641009 | orchestrator | 2026-03-08 01:14:42.641014 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register 
themselves] **** 2026-03-08 01:14:42.641020 | orchestrator | Sunday 08 March 2026 01:13:31 +0000 (0:00:00.168) 0:07:39.590 ********** 2026-03-08 01:14:42.641026 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.641031 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.641037 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641042 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641047 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641054 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-08 01:14:42.641060 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 01:14:42.641066 | orchestrator | 2026-03-08 01:14:42.641072 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-08 01:14:42.641078 | orchestrator | Sunday 08 March 2026 01:13:51 +0000 (0:00:20.080) 0:07:59.670 ********** 2026-03-08 01:14:42.641091 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.641097 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.641103 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641109 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641115 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.641120 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641127 | orchestrator | 2026-03-08 01:14:42.641133 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-08 01:14:42.641146 | orchestrator | Sunday 08 March 2026 01:14:00 +0000 (0:00:09.052) 0:08:08.722 ********** 2026-03-08 01:14:42.641153 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641160 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.641165 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641171 
| orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641178 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.641184 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-08 01:14:42.641190 | orchestrator | 2026-03-08 01:14:42.641196 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-08 01:14:42.641202 | orchestrator | Sunday 08 March 2026 01:14:04 +0000 (0:00:04.160) 0:08:12.883 ********** 2026-03-08 01:14:42.641208 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 01:14:42.641215 | orchestrator | 2026-03-08 01:14:42.641222 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-08 01:14:42.641228 | orchestrator | Sunday 08 March 2026 01:14:18 +0000 (0:00:14.130) 0:08:27.014 ********** 2026-03-08 01:14:42.641235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 01:14:42.641242 | orchestrator | 2026-03-08 01:14:42.641249 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-08 01:14:42.641255 | orchestrator | Sunday 08 March 2026 01:14:19 +0000 (0:00:01.309) 0:08:28.324 ********** 2026-03-08 01:14:42.641261 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.641268 | orchestrator | 2026-03-08 01:14:42.641274 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-08 01:14:42.641284 | orchestrator | Sunday 08 March 2026 01:14:21 +0000 (0:00:01.491) 0:08:29.816 ********** 2026-03-08 01:14:42.641290 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 01:14:42.641297 | orchestrator | 2026-03-08 01:14:42.641303 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-08 01:14:42.641309 | orchestrator | Sunday 08 March 2026 01:14:33 +0000 
(0:00:11.863) 0:08:41.680 ********** 2026-03-08 01:14:42.641316 | orchestrator | ok: [testbed-node-3] 2026-03-08 01:14:42.641322 | orchestrator | ok: [testbed-node-4] 2026-03-08 01:14:42.641328 | orchestrator | ok: [testbed-node-5] 2026-03-08 01:14:42.641335 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:14:42.641341 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:14:42.641347 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:14:42.641353 | orchestrator | 2026-03-08 01:14:42.641360 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-08 01:14:42.641366 | orchestrator | 2026-03-08 01:14:42.641372 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-08 01:14:42.641378 | orchestrator | Sunday 08 March 2026 01:14:35 +0000 (0:00:01.961) 0:08:43.641 ********** 2026-03-08 01:14:42.641385 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:42.641391 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:42.641397 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:42.641404 | orchestrator | 2026-03-08 01:14:42.641410 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-08 01:14:42.641417 | orchestrator | 2026-03-08 01:14:42.641423 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-08 01:14:42.641430 | orchestrator | Sunday 08 March 2026 01:14:36 +0000 (0:00:01.232) 0:08:44.874 ********** 2026-03-08 01:14:42.641442 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641449 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641456 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641463 | orchestrator | 2026-03-08 01:14:42.641470 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-08 01:14:42.641477 | orchestrator | 2026-03-08 01:14:42.641481 
| orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-08 01:14:42.641485 | orchestrator | Sunday 08 March 2026 01:14:37 +0000 (0:00:00.559) 0:08:45.434 ********** 2026-03-08 01:14:42.641489 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-08 01:14:42.641492 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-08 01:14:42.641497 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-08 01:14:42.641504 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-08 01:14:42.641508 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-08 01:14:42.641512 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-08 01:14:42.641515 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-08 01:14:42.641519 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-08 01:14:42.641523 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-08 01:14:42.641527 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-08 01:14:42.641530 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:42.641534 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-08 01:14:42.641551 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-08 01:14:42.641555 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-08 01:14:42.641559 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-08 01:14:42.641563 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-08 01:14:42.641567 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-08 01:14:42.641570 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:42.641574 | orchestrator | skipping: 
[testbed-node-5] => (item=nova-serialproxy)  2026-03-08 01:14:42.641578 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-08 01:14:42.641582 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-08 01:14:42.641585 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-08 01:14:42.641589 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-08 01:14:42.641593 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-08 01:14:42.641601 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-08 01:14:42.641605 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-08 01:14:42.641609 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:42.641613 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-08 01:14:42.641616 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-08 01:14:42.641620 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-08 01:14:42.641624 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-08 01:14:42.641627 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641631 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-08 01:14:42.641637 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-08 01:14:42.641644 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641651 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-08 01:14:42.641658 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-08 01:14:42.641666 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-08 01:14:42.641681 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-08 01:14:42.641689 | orchestrator | skipping: 
[testbed-node-2] => (item=nova-serialproxy)  2026-03-08 01:14:42.641697 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-08 01:14:42.641703 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641710 | orchestrator | 2026-03-08 01:14:42.641722 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-08 01:14:42.641730 | orchestrator | 2026-03-08 01:14:42.641737 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-08 01:14:42.641743 | orchestrator | Sunday 08 March 2026 01:14:38 +0000 (0:00:01.398) 0:08:46.832 ********** 2026-03-08 01:14:42.641750 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-08 01:14:42.641757 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-08 01:14:42.641764 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641771 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-08 01:14:42.641778 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-08 01:14:42.641784 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641791 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-08 01:14:42.641799 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-08 01:14:42.641806 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641814 | orchestrator | 2026-03-08 01:14:42.641820 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-08 01:14:42.641827 | orchestrator | 2026-03-08 01:14:42.641834 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-08 01:14:42.641840 | orchestrator | Sunday 08 March 2026 01:14:39 +0000 (0:00:00.772) 0:08:47.605 ********** 2026-03-08 01:14:42.641847 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
01:14:42.641854 | orchestrator | 2026-03-08 01:14:42.641861 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-08 01:14:42.641865 | orchestrator | 2026-03-08 01:14:42.641869 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-08 01:14:42.641872 | orchestrator | Sunday 08 March 2026 01:14:39 +0000 (0:00:00.635) 0:08:48.241 ********** 2026-03-08 01:14:42.641876 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:42.641880 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:42.641883 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:42.641887 | orchestrator | 2026-03-08 01:14:42.641891 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:14:42.641895 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:14:42.641900 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-08 01:14:42.641904 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-08 01:14:42.641908 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-08 01:14:42.641911 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-08 01:14:42.641915 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-08 01:14:42.641919 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-08 01:14:42.641923 | orchestrator | 2026-03-08 01:14:42.641926 | orchestrator | 2026-03-08 01:14:42.641930 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 
01:14:42.641938 | orchestrator | Sunday 08 March 2026 01:14:40 +0000 (0:00:00.405) 0:08:48.646 ********** 2026-03-08 01:14:42.641941 | orchestrator | =============================================================================== 2026-03-08 01:14:42.641945 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.70s 2026-03-08 01:14:42.641949 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.25s 2026-03-08 01:14:42.641956 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.53s 2026-03-08 01:14:42.641960 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.46s 2026-03-08 01:14:42.641964 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.29s 2026-03-08 01:14:42.641967 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.61s 2026-03-08 01:14:42.641971 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.08s 2026-03-08 01:14:42.641975 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.05s 2026-03-08 01:14:42.641979 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.79s 2026-03-08 01:14:42.641982 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.16s 2026-03-08 01:14:42.641986 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.45s 2026-03-08 01:14:42.641990 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.13s 2026-03-08 01:14:42.641994 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.48s 2026-03-08 01:14:42.641997 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.05s 2026-03-08 01:14:42.642001 | 
orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.01s 2026-03-08 01:14:42.642005 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.86s 2026-03-08 01:14:42.642011 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.49s 2026-03-08 01:14:42.642038 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.05s 2026-03-08 01:14:42.642042 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.33s 2026-03-08 01:14:42.642046 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.27s 2026-03-08 01:14:45.671796 | orchestrator | 2026-03-08 01:14:45 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:45.671850 | orchestrator | 2026-03-08 01:14:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:48.732683 | orchestrator | 2026-03-08 01:14:48 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:48.732730 | orchestrator | 2026-03-08 01:14:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:51.766259 | orchestrator | 2026-03-08 01:14:51 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:51.766339 | orchestrator | 2026-03-08 01:14:51 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:54.809626 | orchestrator | 2026-03-08 01:14:54 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:54.809670 | orchestrator | 2026-03-08 01:14:54 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:14:57.854549 | orchestrator | 2026-03-08 01:14:57 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:14:57.854621 | orchestrator | 2026-03-08 01:14:57 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:15:00.905185 | orchestrator | 
2026-03-08 01:15:00 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:15:00.905236 | orchestrator | 2026-03-08 01:15:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:15:03.950750 | orchestrator | 2026-03-08 01:15:03 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:15:03.950800 | orchestrator | 2026-03-08 01:15:03 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:15:06.992129 | orchestrator | 2026-03-08 01:15:06 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state STARTED 2026-03-08 01:15:06.992216 | orchestrator | 2026-03-08 01:15:06 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:15:10.037385 | orchestrator | 2026-03-08 01:15:10 | INFO  | Task a3f4c85c-2bea-44d6-8782-40f3aa85b5f9 is in state SUCCESS 2026-03-08 01:15:10.038312 | orchestrator | 2026-03-08 01:15:10.038372 | orchestrator | 2026-03-08 01:15:10.038383 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:15:10.038391 | orchestrator | 2026-03-08 01:15:10.038398 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:15:10.038405 | orchestrator | Sunday 08 March 2026 01:10:10 +0000 (0:00:00.266) 0:00:00.266 ********** 2026-03-08 01:15:10.038412 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.038419 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:10.038426 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:10.038432 | orchestrator | 2026-03-08 01:15:10.038439 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:15:10.038445 | orchestrator | Sunday 08 March 2026 01:10:10 +0000 (0:00:00.384) 0:00:00.650 ********** 2026-03-08 01:15:10.038452 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-08 01:15:10.038460 | orchestrator | ok: [testbed-node-1] => 
(item=enable_octavia_True) 2026-03-08 01:15:10.038467 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-08 01:15:10.038474 | orchestrator | 2026-03-08 01:15:10.038481 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-08 01:15:10.038488 | orchestrator | 2026-03-08 01:15:10.038495 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:10.038500 | orchestrator | Sunday 08 March 2026 01:10:10 +0000 (0:00:00.527) 0:00:01.178 ********** 2026-03-08 01:15:10.038508 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:10.038515 | orchestrator | 2026-03-08 01:15:10.038521 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-08 01:15:10.038527 | orchestrator | Sunday 08 March 2026 01:10:11 +0000 (0:00:00.644) 0:00:01.822 ********** 2026-03-08 01:15:10.038533 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-08 01:15:10.038540 | orchestrator | 2026-03-08 01:15:10.038566 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-08 01:15:10.038572 | orchestrator | Sunday 08 March 2026 01:10:15 +0000 (0:00:03.713) 0:00:05.536 ********** 2026-03-08 01:15:10.038579 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-08 01:15:10.038586 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-08 01:15:10.038592 | orchestrator | 2026-03-08 01:15:10.038678 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-08 01:15:10.038691 | orchestrator | Sunday 08 March 2026 01:10:21 +0000 (0:00:06.335) 0:00:11.872 ********** 2026-03-08 01:15:10.038701 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 01:15:10.038707 | orchestrator | 2026-03-08 01:15:10.038714 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-08 01:15:10.038740 | orchestrator | Sunday 08 March 2026 01:10:24 +0000 (0:00:03.219) 0:00:15.091 ********** 2026-03-08 01:15:10.038748 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:15:10.038757 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-08 01:15:10.038767 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-08 01:15:10.038792 | orchestrator | 2026-03-08 01:15:10.039185 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-08 01:15:10.039194 | orchestrator | Sunday 08 March 2026 01:10:32 +0000 (0:00:08.058) 0:00:23.149 ********** 2026-03-08 01:15:10.039201 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:15:10.039209 | orchestrator | 2026-03-08 01:15:10.039215 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-08 01:15:10.039223 | orchestrator | Sunday 08 March 2026 01:10:36 +0000 (0:00:03.307) 0:00:26.456 ********** 2026-03-08 01:15:10.039231 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-08 01:15:10.039237 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-08 01:15:10.039243 | orchestrator | 2026-03-08 01:15:10.039249 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-08 01:15:10.039257 | orchestrator | Sunday 08 March 2026 01:10:43 +0000 (0:00:07.302) 0:00:33.758 ********** 2026-03-08 01:15:10.039263 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-08 01:15:10.039269 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_global_observer) 2026-03-08 01:15:10.039277 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-08 01:15:10.039283 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-08 01:15:10.039288 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-08 01:15:10.039294 | orchestrator | 2026-03-08 01:15:10.039300 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:10.039306 | orchestrator | Sunday 08 March 2026 01:10:59 +0000 (0:00:15.743) 0:00:49.501 ********** 2026-03-08 01:15:10.039313 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:10.039319 | orchestrator | 2026-03-08 01:15:10.039325 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-08 01:15:10.039331 | orchestrator | Sunday 08 March 2026 01:10:59 +0000 (0:00:00.569) 0:00:50.071 ********** 2026-03-08 01:15:10.039337 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.039344 | orchestrator | 2026-03-08 01:15:10.039351 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-08 01:15:10.039358 | orchestrator | Sunday 08 March 2026 01:11:05 +0000 (0:00:05.323) 0:00:55.394 ********** 2026-03-08 01:15:10.039366 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.039372 | orchestrator | 2026-03-08 01:15:10.039378 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-08 01:15:10.039399 | orchestrator | Sunday 08 March 2026 01:11:09 +0000 (0:00:04.583) 0:00:59.978 ********** 2026-03-08 01:15:10.039408 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.039415 | orchestrator | 2026-03-08 01:15:10.039421 | orchestrator | TASK [octavia : Create security groups for octavia] 
**************************** 2026-03-08 01:15:10.039428 | orchestrator | Sunday 08 March 2026 01:11:12 +0000 (0:00:03.114) 0:01:03.092 ********** 2026-03-08 01:15:10.039434 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-08 01:15:10.039441 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-08 01:15:10.039447 | orchestrator | 2026-03-08 01:15:10.039454 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-08 01:15:10.039460 | orchestrator | Sunday 08 March 2026 01:11:23 +0000 (0:00:10.440) 0:01:13.533 ********** 2026-03-08 01:15:10.039467 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-08 01:15:10.039473 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-08 01:15:10.039481 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-08 01:15:10.039488 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-08 01:15:10.039508 | orchestrator | 2026-03-08 01:15:10.039515 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-08 01:15:10.039521 | orchestrator | Sunday 08 March 2026 01:11:38 +0000 (0:00:15.172) 0:01:28.706 ********** 2026-03-08 01:15:10.039527 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.039534 | orchestrator | 2026-03-08 01:15:10.039543 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-08 01:15:10.039550 | orchestrator | Sunday 08 March 2026 01:11:43 +0000 (0:00:04.606) 0:01:33.313 ********** 2026-03-08 
01:15:10.039557 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.039563 | orchestrator | 2026-03-08 01:15:10.039569 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-08 01:15:10.039575 | orchestrator | Sunday 08 March 2026 01:11:48 +0000 (0:00:05.256) 0:01:38.570 ********** 2026-03-08 01:15:10.039582 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.039587 | orchestrator | 2026-03-08 01:15:10.039594 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-08 01:15:10.039617 | orchestrator | Sunday 08 March 2026 01:11:48 +0000 (0:00:00.269) 0:01:38.839 ********** 2026-03-08 01:15:10.039625 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.039629 | orchestrator | 2026-03-08 01:15:10.039635 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:10.039652 | orchestrator | Sunday 08 March 2026 01:11:52 +0000 (0:00:03.936) 0:01:42.775 ********** 2026-03-08 01:15:10.039659 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:10.039665 | orchestrator | 2026-03-08 01:15:10.039672 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-08 01:15:10.039679 | orchestrator | Sunday 08 March 2026 01:11:53 +0000 (0:00:01.323) 0:01:44.099 ********** 2026-03-08 01:15:10.039685 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.039691 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.039697 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.039702 | orchestrator | 2026-03-08 01:15:10.039709 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-08 01:15:10.039714 | orchestrator | Sunday 08 March 2026 01:11:59 +0000 (0:00:06.137) 0:01:50.236 ********** 2026-03-08 
01:15:10.039719 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.039728 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.040087 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.040123 | orchestrator | 2026-03-08 01:15:10.040130 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-08 01:15:10.040137 | orchestrator | Sunday 08 March 2026 01:12:04 +0000 (0:00:04.348) 0:01:54.585 ********** 2026-03-08 01:15:10.040143 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.040149 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.040156 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.040162 | orchestrator | 2026-03-08 01:15:10.040169 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-08 01:15:10.040175 | orchestrator | Sunday 08 March 2026 01:12:05 +0000 (0:00:00.811) 0:01:55.396 ********** 2026-03-08 01:15:10.040181 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:10.040188 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040194 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:10.040200 | orchestrator | 2026-03-08 01:15:10.040206 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-08 01:15:10.040212 | orchestrator | Sunday 08 March 2026 01:12:07 +0000 (0:00:02.035) 0:01:57.431 ********** 2026-03-08 01:15:10.040218 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.040224 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.040230 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.040236 | orchestrator | 2026-03-08 01:15:10.040242 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-08 01:15:10.040259 | orchestrator | Sunday 08 March 2026 01:12:08 +0000 (0:00:01.365) 0:01:58.797 ********** 2026-03-08 01:15:10.040265 | orchestrator 
| changed: [testbed-node-0] 2026-03-08 01:15:10.040271 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.040277 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.040283 | orchestrator | 2026-03-08 01:15:10.040289 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-08 01:15:10.040295 | orchestrator | Sunday 08 March 2026 01:12:09 +0000 (0:00:01.162) 0:01:59.960 ********** 2026-03-08 01:15:10.040301 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.040307 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.040313 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.040319 | orchestrator | 2026-03-08 01:15:10.040369 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-08 01:15:10.040379 | orchestrator | Sunday 08 March 2026 01:12:11 +0000 (0:00:02.003) 0:02:01.963 ********** 2026-03-08 01:15:10.040385 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.040391 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.040397 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.040403 | orchestrator | 2026-03-08 01:15:10.040410 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-08 01:15:10.040416 | orchestrator | Sunday 08 March 2026 01:12:13 +0000 (0:00:01.768) 0:02:03.732 ********** 2026-03-08 01:15:10.040422 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040427 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:10.040433 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:10.040437 | orchestrator | 2026-03-08 01:15:10.040441 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-08 01:15:10.040445 | orchestrator | Sunday 08 March 2026 01:12:14 +0000 (0:00:00.742) 0:02:04.474 ********** 2026-03-08 01:15:10.040449 | orchestrator | ok: [testbed-node-2] 
2026-03-08 01:15:10.040455 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:10.040462 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040467 | orchestrator | 2026-03-08 01:15:10.040473 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:10.040503 | orchestrator | Sunday 08 March 2026 01:12:17 +0000 (0:00:02.950) 0:02:07.425 ********** 2026-03-08 01:15:10.040510 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:10.040518 | orchestrator | 2026-03-08 01:15:10.040522 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-08 01:15:10.040526 | orchestrator | Sunday 08 March 2026 01:12:17 +0000 (0:00:00.818) 0:02:08.243 ********** 2026-03-08 01:15:10.040530 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040534 | orchestrator | 2026-03-08 01:15:10.040538 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-08 01:15:10.040543 | orchestrator | Sunday 08 March 2026 01:12:22 +0000 (0:00:04.202) 0:02:12.446 ********** 2026-03-08 01:15:10.040547 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040553 | orchestrator | 2026-03-08 01:15:10.040558 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-08 01:15:10.040564 | orchestrator | Sunday 08 March 2026 01:12:25 +0000 (0:00:03.481) 0:02:15.928 ********** 2026-03-08 01:15:10.040570 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-08 01:15:10.040576 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-08 01:15:10.040582 | orchestrator | 2026-03-08 01:15:10.040588 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-08 01:15:10.040594 | orchestrator | Sunday 08 March 2026 01:12:32 +0000 
(0:00:06.956) 0:02:22.884 ********** 2026-03-08 01:15:10.040618 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040625 | orchestrator | 2026-03-08 01:15:10.040640 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-08 01:15:10.040647 | orchestrator | Sunday 08 March 2026 01:12:36 +0000 (0:00:04.147) 0:02:27.032 ********** 2026-03-08 01:15:10.040661 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:10.040667 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:10.040673 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:10.040679 | orchestrator | 2026-03-08 01:15:10.040684 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-08 01:15:10.040690 | orchestrator | Sunday 08 March 2026 01:12:37 +0000 (0:00:00.416) 0:02:27.449 ********** 2026-03-08 01:15:10.040699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.040741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.040750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.040759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.040772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.040787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.040794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.040918 | orchestrator | 2026-03-08 01:15:10.040925 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-08 01:15:10.040932 | orchestrator | Sunday 08 March 2026 01:12:39 +0000 (0:00:02.560) 0:02:30.009 ********** 2026-03-08 01:15:10.040937 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.040942 | orchestrator | 2026-03-08 01:15:10.040947 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-08 01:15:10.040952 | orchestrator | Sunday 08 March 2026 01:12:39 +0000 (0:00:00.137) 
0:02:30.147 ********** 2026-03-08 01:15:10.040957 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.040963 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:10.040969 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:10.040980 | orchestrator | 2026-03-08 01:15:10.040986 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-08 01:15:10.040993 | orchestrator | Sunday 08 March 2026 01:12:40 +0000 (0:00:00.590) 0:02:30.738 ********** 2026-03-08 01:15:10.041000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041050 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.041077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041128 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:10.041155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041189 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041202 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:10.041208 | orchestrator | 2026-03-08 01:15:10.041215 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:10.041222 | orchestrator | Sunday 08 March 2026 01:12:41 +0000 (0:00:00.895) 0:02:31.633 ********** 2026-03-08 01:15:10.041229 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:10.041237 | orchestrator | 2026-03-08 01:15:10.041243 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-08 01:15:10.041251 | orchestrator | Sunday 08 March 2026 01:12:42 +0000 
(0:00:00.614) 0:02:32.248 ********** 2026-03-08 01:15:10.041258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.041290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2026-03-08 01:15:10.041306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.041316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.041323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.041330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.041336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 
2026-03-08 01:15:10.041351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': 
'30'}}}) 2026-03-08 01:15:10.041368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041400 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041407 | orchestrator | 2026-03-08 01:15:10.041413 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-08 01:15:10.041419 | orchestrator | Sunday 08 March 2026 01:12:47 +0000 (0:00:05.270) 0:02:37.518 ********** 2026-03-08 01:15:10.041424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041468 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.041475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041491 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041515 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 01:15:10.041527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041564 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:10.041570 | orchestrator | 2026-03-08 01:15:10.041575 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-08 01:15:10.041581 | orchestrator | Sunday 08 March 2026 01:12:47 +0000 (0:00:00.673) 0:02:38.192 ********** 2026-03-08 01:15:10.041587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041688 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.041695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041744 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:10.041755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:10.041761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:10.041774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-08 01:15:10.041792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:10.041798 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:10.041804 | orchestrator | 2026-03-08 01:15:10.041810 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-08 01:15:10.041816 | orchestrator | Sunday 08 March 2026 01:12:48 +0000 (0:00:00.918) 0:02:39.110 ********** 2026-03-08 01:15:10.041827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2026-03-08 01:15:10.041834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.041845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.041857 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.041864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.041870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.041880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.041962 | orchestrator | 2026-03-08 01:15:10.041968 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-08 01:15:10.041974 | orchestrator | Sunday 08 March 2026 01:12:53 +0000 (0:00:05.025) 0:02:44.136 ********** 2026-03-08 01:15:10.041980 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-08 01:15:10.041987 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-08 01:15:10.041993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-08 01:15:10.041999 | orchestrator | 2026-03-08 01:15:10.042006 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-08 01:15:10.042043 | orchestrator | Sunday 08 March 2026 01:12:55 +0000 (0:00:01.927) 0:02:46.063 ********** 2026-03-08 01:15:10.042061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.042070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.042081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.042095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.042102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.042108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.042118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042123 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042143 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042169 | orchestrator | 2026-03-08 01:15:10.042174 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-08 01:15:10.042177 | orchestrator | Sunday 08 March 2026 01:13:13 +0000 (0:00:17.996) 0:03:04.060 ********** 2026-03-08 01:15:10.042182 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.042186 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:10.042194 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:10.042197 | orchestrator | 2026-03-08 01:15:10.042201 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-08 01:15:10.042205 | orchestrator | Sunday 08 March 2026 01:13:15 +0000 (0:00:01.601) 0:03:05.662 ********** 2026-03-08 01:15:10.042209 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem)
2026-03-08 01:15:10.042216 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042221 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042230 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042240 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042251 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042257 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042264 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042270 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042277 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042284 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042289 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042292 | orchestrator |
2026-03-08 01:15:10.042298 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-08 01:15:10.042304 | orchestrator | Sunday 08 March 2026 01:13:20 +0000 (0:00:05.269) 0:03:10.932 **********
2026-03-08 01:15:10.042310 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042316 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042322 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042328 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042335 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042341 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042347 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042355 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042359 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042366 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042371 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042378 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042384 | orchestrator |
2026-03-08 01:15:10.042390 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-08 01:15:10.042397 | orchestrator | Sunday 08 March 2026 01:13:26 +0000 (0:00:05.535) 0:03:16.467 **********
2026-03-08 01:15:10.042403 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042410 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042415 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-08 01:15:10.042422 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042428 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042434 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-08 01:15:10.042441 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042448 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042461 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-08 01:15:10.042469 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-08 01:15:10.042482 | orchestrator | changed: [testbed-node-1] =>
(item=server_ca.key.pem) 2026-03-08 01:15:10.042489 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-08 01:15:10.042495 | orchestrator | 2026-03-08 01:15:10.042505 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-08 01:15:10.042514 | orchestrator | Sunday 08 March 2026 01:13:31 +0000 (0:00:05.104) 0:03:21.572 ********** 2026-03-08 01:15:10.042520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.042533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.042540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:10.042546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.042557 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.042572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:10.042578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:10.042703 | orchestrator | 2026-03-08 01:15:10.042710 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:10.042716 | orchestrator | Sunday 08 March 2026 01:13:35 +0000 (0:00:04.306) 0:03:25.878 ********** 2026-03-08 01:15:10.042722 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:10.042728 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:10.042735 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:10.042741 | orchestrator | 2026-03-08 01:15:10.042747 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-08 01:15:10.042754 | orchestrator | Sunday 08 March 2026 01:13:35 +0000 (0:00:00.288) 0:03:26.167 ********** 2026-03-08 01:15:10.042761 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:10.042766 | orchestrator | 2026-03-08 01:15:10.042770 | orchestrator | TASK 
[octavia : Creating Octavia persistence database] *************************
2026-03-08 01:15:10.042773 | orchestrator | Sunday 08 March 2026 01:13:37 +0000 (0:00:01.952) 0:03:28.120 **********
2026-03-08 01:15:10.042777 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.042781 | orchestrator |
2026-03-08 01:15:10.042785 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-08 01:15:10.042789 | orchestrator | Sunday 08 March 2026 01:13:40 +0000 (0:00:02.193) 0:03:30.313 **********
2026-03-08 01:15:10.042792 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.042801 | orchestrator |
2026-03-08 01:15:10.042808 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-08 01:15:10.042815 | orchestrator | Sunday 08 March 2026 01:13:42 +0000 (0:00:02.250) 0:03:32.564 **********
2026-03-08 01:15:10.042822 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.042832 | orchestrator |
2026-03-08 01:15:10.042838 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-08 01:15:10.042844 | orchestrator | Sunday 08 March 2026 01:13:44 +0000 (0:00:02.657) 0:03:35.222 **********
2026-03-08 01:15:10.042850 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.042856 | orchestrator |
2026-03-08 01:15:10.042862 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-08 01:15:10.042867 | orchestrator | Sunday 08 March 2026 01:14:06 +0000 (0:00:21.956) 0:03:57.178 **********
2026-03-08 01:15:10.042873 | orchestrator |
2026-03-08 01:15:10.042879 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-08 01:15:10.042885 | orchestrator | Sunday 08 March 2026 01:14:07 +0000 (0:00:00.072) 0:03:57.250 **********
2026-03-08 01:15:10.042891 | orchestrator |
2026-03-08 01:15:10.042899 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-08 01:15:10.042905 | orchestrator | Sunday 08 March 2026 01:14:07 +0000 (0:00:00.064) 0:03:57.315 **********
2026-03-08 01:15:10.042911 | orchestrator |
2026-03-08 01:15:10.042916 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-08 01:15:10.042946 | orchestrator | Sunday 08 March 2026 01:14:07 +0000 (0:00:00.076) 0:03:57.392 **********
2026-03-08 01:15:10.042954 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.042960 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:15:10.042966 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:15:10.042971 | orchestrator |
2026-03-08 01:15:10.042976 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-08 01:15:10.042980 | orchestrator | Sunday 08 March 2026 01:14:24 +0000 (0:00:17.224) 0:04:14.616 **********
2026-03-08 01:15:10.042983 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.042987 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:15:10.042991 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:15:10.042994 | orchestrator |
2026-03-08 01:15:10.042998 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-08 01:15:10.043002 | orchestrator | Sunday 08 March 2026 01:14:36 +0000 (0:00:11.827) 0:04:26.443 **********
2026-03-08 01:15:10.043005 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.043009 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:15:10.043013 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:15:10.043017 | orchestrator |
2026-03-08 01:15:10.043021 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-08 01:15:10.043026 | orchestrator | Sunday 08 March 2026 01:14:47 +0000 (0:00:10.952) 0:04:37.396 **********
2026-03-08 01:15:10.043032 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.043038 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:15:10.043046 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:15:10.043055 | orchestrator |
2026-03-08 01:15:10.043061 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-08 01:15:10.043066 | orchestrator | Sunday 08 March 2026 01:14:57 +0000 (0:00:10.229) 0:04:47.625 **********
2026-03-08 01:15:10.043073 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:10.043079 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:15:10.043085 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:15:10.043092 | orchestrator |
2026-03-08 01:15:10.043098 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:15:10.043105 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 01:15:10.043113 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-08 01:15:10.043127 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-08 01:15:10.043135 | orchestrator |
2026-03-08 01:15:10.043141 | orchestrator |
2026-03-08 01:15:10.043148 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:15:10.043159 | orchestrator | Sunday 08 March 2026 01:15:08 +0000 (0:00:10.930) 0:04:58.556 **********
2026-03-08 01:15:10.043166 | orchestrator | ===============================================================================
2026-03-08 01:15:10.043172 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.96s
2026-03-08 01:15:10.043191 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.00s
2026-03-08 01:15:10.043197 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.22s
2026-03-08 01:15:10.043204 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.74s
2026-03-08 01:15:10.043210 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.17s
2026-03-08 01:15:10.043217 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.83s
2026-03-08 01:15:10.043223 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.95s
2026-03-08 01:15:10.043229 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.93s
2026-03-08 01:15:10.043236 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.44s
2026-03-08 01:15:10.043242 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.23s
2026-03-08 01:15:10.043249 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.06s
2026-03-08 01:15:10.043255 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.30s
2026-03-08 01:15:10.043261 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.96s
2026-03-08 01:15:10.043268 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.34s
2026-03-08 01:15:10.043274 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.14s
2026-03-08 01:15:10.043280 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.54s
2026-03-08 01:15:10.043286 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.32s
2026-03-08 01:15:10.043293 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.27s
2026-03-08 01:15:10.043298 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.27s
2026-03-08 01:15:10.043304 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.26s
2026-03-08 01:15:10.043311 | orchestrator | 2026-03-08 01:15:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:13.076110 | orchestrator | 2026-03-08 01:15:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:16.112807 | orchestrator | 2026-03-08 01:15:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:19.146850 | orchestrator | 2026-03-08 01:15:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:22.193093 | orchestrator | 2026-03-08 01:15:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:25.232142 | orchestrator | 2026-03-08 01:15:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:28.276498 | orchestrator | 2026-03-08 01:15:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:31.315627 | orchestrator | 2026-03-08 01:15:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:34.357339 | orchestrator | 2026-03-08 01:15:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:37.392582 | orchestrator | 2026-03-08 01:15:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:40.433933 | orchestrator | 2026-03-08 01:15:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:43.477427 | orchestrator | 2026-03-08 01:15:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:46.528075 | orchestrator | 2026-03-08 01:15:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:49.568867 | orchestrator | 2026-03-08 01:15:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-08 01:15:52.612050 | orchestrator | 2026-03-08 01:15:52 | INFO  | Wait 1 second(s) until
refresh of running tasks 2026-03-08 01:15:55.650449 | orchestrator | 2026-03-08 01:15:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-08 01:15:58.691216 | orchestrator | 2026-03-08 01:15:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-08 01:16:01.730666 | orchestrator | 2026-03-08 01:16:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-08 01:16:04.767950 | orchestrator | 2026-03-08 01:16:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-08 01:16:07.814979 | orchestrator | 2026-03-08 01:16:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-08 01:16:10.856507 | orchestrator | 2026-03-08 01:16:11.183216 | orchestrator | 2026-03-08 01:16:11.188414 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Mar 8 01:16:11 UTC 2026 2026-03-08 01:16:11.188489 | orchestrator | 2026-03-08 01:16:11.518568 | orchestrator | ok: Runtime: 0:36:31.677465 2026-03-08 01:16:11.789972 | 2026-03-08 01:16:11.790197 | TASK [Bootstrap services] 2026-03-08 01:16:12.589540 | orchestrator | 2026-03-08 01:16:12.589675 | orchestrator | # BOOTSTRAP 2026-03-08 01:16:12.589688 | orchestrator | 2026-03-08 01:16:12.589736 | orchestrator | + set -e 2026-03-08 01:16:12.589744 | orchestrator | + echo 2026-03-08 01:16:12.589752 | orchestrator | + echo '# BOOTSTRAP' 2026-03-08 01:16:12.589763 | orchestrator | + echo 2026-03-08 01:16:12.589791 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-08 01:16:12.598872 | orchestrator | + set -e 2026-03-08 01:16:12.598971 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-08 01:16:17.553053 | orchestrator | 2026-03-08 01:16:17 | INFO  | It takes a moment until task b37b8daa-4867-47a2-8042-5a8b35f366b0 (flavor-manager) has been started and output is visible here. 
2026-03-08 01:16:25.428685 | orchestrator | 2026-03-08 01:16:20 | INFO  | Flavor SCS-1L-1 created
2026-03-08 01:16:25.428899 | orchestrator | 2026-03-08 01:16:21 | INFO  | Flavor SCS-1L-1-5 created
2026-03-08 01:16:25.428914 | orchestrator | 2026-03-08 01:16:21 | INFO  | Flavor SCS-1V-2 created
2026-03-08 01:16:25.428922 | orchestrator | 2026-03-08 01:16:21 | INFO  | Flavor SCS-1V-2-5 created
2026-03-08 01:16:25.428930 | orchestrator | 2026-03-08 01:16:21 | INFO  | Flavor SCS-1V-4 created
2026-03-08 01:16:25.428937 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-1V-4-10 created
2026-03-08 01:16:25.428945 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-1V-8 created
2026-03-08 01:16:25.428952 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-1V-8-20 created
2026-03-08 01:16:25.428972 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-2V-4 created
2026-03-08 01:16:25.428979 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-2V-4-10 created
2026-03-08 01:16:25.428985 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-2V-8 created
2026-03-08 01:16:25.428992 | orchestrator | 2026-03-08 01:16:22 | INFO  | Flavor SCS-2V-8-20 created
2026-03-08 01:16:25.428999 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-2V-16 created
2026-03-08 01:16:25.429005 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-2V-16-50 created
2026-03-08 01:16:25.429012 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-4V-8 created
2026-03-08 01:16:25.429019 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-4V-8-20 created
2026-03-08 01:16:25.429026 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-4V-16 created
2026-03-08 01:16:25.429033 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-4V-16-50 created
2026-03-08 01:16:25.429040 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-4V-32 created
2026-03-08 01:16:25.429046 | orchestrator | 2026-03-08 01:16:23 | INFO  | Flavor SCS-4V-32-100 created
2026-03-08 01:16:25.429053 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-8V-16 created
2026-03-08 01:16:25.429060 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-8V-16-50 created
2026-03-08 01:16:25.429066 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-8V-32 created
2026-03-08 01:16:25.429073 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-8V-32-100 created
2026-03-08 01:16:25.429080 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-16V-32 created
2026-03-08 01:16:25.429087 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-16V-32-100 created
2026-03-08 01:16:25.429093 | orchestrator | 2026-03-08 01:16:24 | INFO  | Flavor SCS-2V-4-20s created
2026-03-08 01:16:25.429100 | orchestrator | 2026-03-08 01:16:25 | INFO  | Flavor SCS-4V-8-50s created
2026-03-08 01:16:25.429107 | orchestrator | 2026-03-08 01:16:25 | INFO  | Flavor SCS-8V-32-100s created
2026-03-08 01:16:27.759028 | orchestrator | 2026-03-08 01:16:27 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-08 01:16:38.054978 | orchestrator | 2026-03-08 01:16:38 | INFO  | Task 8aab1969-7827-476e-8768-cc0c7f370552 (bootstrap-basic) was prepared for execution.
2026-03-08 01:16:38.055100 | orchestrator | 2026-03-08 01:16:38 | INFO  | It takes a moment until task 8aab1969-7827-476e-8768-cc0c7f370552 (bootstrap-basic) has been started and output is visible here.
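The flavor names created by flavor-manager above follow the SCS naming scheme. As an illustrative sketch (not part of the job output, and a simplified reading of the scheme): a name like SCS-4V-16-50 encodes vCPU count, a CPU class letter (V for vCPU, L for low-performance core), RAM in GiB, and an optional root-disk size in GB, with a trailing "s" on the disk field for local SSD, as in SCS-8V-32-100s.

```python
import re

# Simplified parser for the SCS flavor names seen in the log above.
# Assumption: names follow "SCS-<cpus><class>-<ramGiB>[-<diskGB>[s]]",
# e.g. SCS-1L-1-5, SCS-4V-16-50, SCS-8V-32-100s. This is a hypothetical
# helper, not the authoritative SCS flavor-naming parser.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cls>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_class": m["cls"],      # V = vCPU, L = low-performance core
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else None,  # None = no root disk
        "ssd": m["ssd"] is not None,
    }

print(parse_scs_flavor("SCS-4V-16-50"))
```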
2026-03-08 01:17:24.198063 | orchestrator |
2026-03-08 01:17:24.198119 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-08 01:17:24.198125 | orchestrator |
2026-03-08 01:17:24.198129 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-08 01:17:24.198133 | orchestrator | Sunday 08 March 2026 01:16:42 +0000 (0:00:00.071) 0:00:00.071 **********
2026-03-08 01:17:24.198138 | orchestrator | ok: [localhost]
2026-03-08 01:17:24.198142 | orchestrator |
2026-03-08 01:17:24.198146 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-08 01:17:24.198150 | orchestrator | Sunday 08 March 2026 01:16:44 +0000 (0:00:01.938) 0:00:02.010 **********
2026-03-08 01:17:24.198154 | orchestrator | ok: [localhost]
2026-03-08 01:17:24.198158 | orchestrator |
2026-03-08 01:17:24.198162 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-08 01:17:24.198166 | orchestrator | Sunday 08 March 2026 01:16:53 +0000 (0:00:09.175) 0:00:11.185 **********
2026-03-08 01:17:24.198170 | orchestrator | changed: [localhost]
2026-03-08 01:17:24.198174 | orchestrator |
2026-03-08 01:17:24.198178 | orchestrator | TASK [Create public network] ***************************************************
2026-03-08 01:17:24.198182 | orchestrator | Sunday 08 March 2026 01:17:00 +0000 (0:00:07.455) 0:00:18.640 **********
2026-03-08 01:17:24.198186 | orchestrator | changed: [localhost]
2026-03-08 01:17:24.198190 | orchestrator |
2026-03-08 01:17:24.198193 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-08 01:17:24.198197 | orchestrator | Sunday 08 March 2026 01:17:05 +0000 (0:00:04.939) 0:00:23.580 **********
2026-03-08 01:17:24.198203 | orchestrator | changed: [localhost]
2026-03-08 01:17:24.198207 | orchestrator |
2026-03-08 01:17:24.198211 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-08 01:17:24.198215 | orchestrator | Sunday 08 March 2026 01:17:12 +0000 (0:00:06.229) 0:00:29.809 **********
2026-03-08 01:17:24.198219 | orchestrator | changed: [localhost]
2026-03-08 01:17:24.198223 | orchestrator |
2026-03-08 01:17:24.198227 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-08 01:17:24.198230 | orchestrator | Sunday 08 March 2026 01:17:16 +0000 (0:00:04.382) 0:00:34.192 **********
2026-03-08 01:17:24.198234 | orchestrator | changed: [localhost]
2026-03-08 01:17:24.198238 | orchestrator |
2026-03-08 01:17:24.198242 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-08 01:17:24.198250 | orchestrator | Sunday 08 March 2026 01:17:20 +0000 (0:00:03.792) 0:00:37.985 **********
2026-03-08 01:17:24.198254 | orchestrator | ok: [localhost]
2026-03-08 01:17:24.198258 | orchestrator |
2026-03-08 01:17:24.198262 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:17:24.198266 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:17:24.198271 | orchestrator |
2026-03-08 01:17:24.198275 | orchestrator |
2026-03-08 01:17:24.198279 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:17:24.198282 | orchestrator | Sunday 08 March 2026 01:17:23 +0000 (0:00:03.597) 0:00:41.582 **********
2026-03-08 01:17:24.198286 | orchestrator | ===============================================================================
2026-03-08 01:17:24.198290 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.18s
2026-03-08 01:17:24.198294 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.46s
2026-03-08 01:17:24.198298 | orchestrator | Set public network to default ------------------------------------------- 6.23s
2026-03-08 01:17:24.198301 | orchestrator | Create public network --------------------------------------------------- 4.94s
2026-03-08 01:17:24.198315 | orchestrator | Create public subnet ---------------------------------------------------- 4.38s
2026-03-08 01:17:24.198319 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.79s
2026-03-08 01:17:24.198323 | orchestrator | Create manager role ----------------------------------------------------- 3.60s
2026-03-08 01:17:24.198327 | orchestrator | Gathering Facts --------------------------------------------------------- 1.94s
2026-03-08 01:17:26.580542 | orchestrator | 2026-03-08 01:17:26 | INFO  | It takes a moment until task 927ad497-d0be-420c-8299-50615cd021b6 (image-manager) has been started and output is visible here.
2026-03-08 01:18:09.308529 | orchestrator | 2026-03-08 01:17:29 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-08 01:18:09.308585 | orchestrator | 2026-03-08 01:17:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-08 01:18:09.308595 | orchestrator | 2026-03-08 01:17:29 | INFO  | Importing image Cirros 0.6.2
2026-03-08 01:18:09.308602 | orchestrator | 2026-03-08 01:17:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-08 01:18:09.308609 | orchestrator | 2026-03-08 01:17:31 | INFO  | Waiting for image to leave queued state...
2026-03-08 01:18:09.308617 | orchestrator | 2026-03-08 01:17:33 | INFO  | Waiting for import to complete...
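The "TASKS RECAP" blocks in this log list the slowest tasks in the form "<task name> ---- 9.18s". When post-processing such logs, those lines can be turned into (name, seconds) pairs with a small parser like the following (a hypothetical helper, not part of the job itself):

```python
import re

# Matches Ansible profile_tasks recap lines such as
# "Get volume type LUKS ------------------------- 9.18s".
# The separator is a space, a run of dashes, and a space, so
# hyphens inside task names are not mistaken for the separator.
RECAP_RE = re.compile(r"^(?P<name>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m["name"], float(m["secs"])))
    return out

recap = parse_recap([
    "Get volume type LUKS ---------------------------------------------------- 9.18s",
    "Create volume type LUKS ------------------------------------------------- 7.46s",
])
print(recap)
```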
2026-03-08 01:18:09.308623 | orchestrator | 2026-03-08 01:17:43 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-08 01:18:09.308630 | orchestrator | 2026-03-08 01:17:44 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-08 01:18:09.308636 | orchestrator | 2026-03-08 01:17:44 | INFO  | Setting internal_version = 0.6.2
2026-03-08 01:18:09.308643 | orchestrator | 2026-03-08 01:17:44 | INFO  | Setting image_original_user = cirros
2026-03-08 01:18:09.308650 | orchestrator | 2026-03-08 01:17:44 | INFO  | Adding tag os:cirros
2026-03-08 01:18:09.308656 | orchestrator | 2026-03-08 01:17:44 | INFO  | Setting property architecture: x86_64
2026-03-08 01:18:09.308663 | orchestrator | 2026-03-08 01:17:44 | INFO  | Setting property hw_disk_bus: scsi
2026-03-08 01:18:09.308669 | orchestrator | 2026-03-08 01:17:45 | INFO  | Setting property hw_rng_model: virtio
2026-03-08 01:18:09.308676 | orchestrator | 2026-03-08 01:17:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-08 01:18:09.308682 | orchestrator | 2026-03-08 01:17:45 | INFO  | Setting property hw_watchdog_action: reset
2026-03-08 01:18:09.308689 | orchestrator | 2026-03-08 01:17:45 | INFO  | Setting property hypervisor_type: qemu
2026-03-08 01:18:09.308696 | orchestrator | 2026-03-08 01:17:46 | INFO  | Setting property os_distro: cirros
2026-03-08 01:18:09.308702 | orchestrator | 2026-03-08 01:17:46 | INFO  | Setting property os_purpose: minimal
2026-03-08 01:18:09.308708 | orchestrator | 2026-03-08 01:17:46 | INFO  | Setting property replace_frequency: never
2026-03-08 01:18:09.308715 | orchestrator | 2026-03-08 01:17:46 | INFO  | Setting property uuid_validity: none
2026-03-08 01:18:09.308721 | orchestrator | 2026-03-08 01:17:47 | INFO  | Setting property provided_until: none
2026-03-08 01:18:09.308728 | orchestrator | 2026-03-08 01:17:47 | INFO  | Setting property image_description: Cirros
2026-03-08 01:18:09.308734 | orchestrator | 2026-03-08 01:17:47 | INFO  | Setting property image_name: Cirros
2026-03-08 01:18:09.308740 | orchestrator | 2026-03-08 01:17:47 | INFO  | Setting property internal_version: 0.6.2
2026-03-08 01:18:09.308778 | orchestrator | 2026-03-08 01:17:47 | INFO  | Setting property image_original_user: cirros
2026-03-08 01:18:09.308798 | orchestrator | 2026-03-08 01:17:48 | INFO  | Setting property os_version: 0.6.2
2026-03-08 01:18:09.308813 | orchestrator | 2026-03-08 01:17:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-08 01:18:09.308822 | orchestrator | 2026-03-08 01:17:48 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-08 01:18:09.308828 | orchestrator | 2026-03-08 01:17:49 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-08 01:18:09.308834 | orchestrator | 2026-03-08 01:17:49 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-08 01:18:09.308841 | orchestrator | 2026-03-08 01:17:49 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-08 01:18:09.308848 | orchestrator | 2026-03-08 01:17:49 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-08 01:18:09.308857 | orchestrator | 2026-03-08 01:17:49 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-08 01:18:09.308864 | orchestrator | 2026-03-08 01:17:49 | INFO  | Importing image Cirros 0.6.3
2026-03-08 01:18:09.308882 | orchestrator | 2026-03-08 01:17:49 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-08 01:18:09.308889 | orchestrator | 2026-03-08 01:17:51 | INFO  | Waiting for image to leave queued state...
2026-03-08 01:18:09.308896 | orchestrator | 2026-03-08 01:17:53 | INFO  | Waiting for import to complete...
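The image-manager output above records each Glance image property it applies as "Setting property <key>: <value>". A sketch like the following (a hypothetical post-processing helper, not the actual openstack-image-manager code) collects those lines into a properties dict; it splits on the first ": " only, so URL values containing colons stay intact:

```python
# Collect "Setting property <key>: <value>" log lines into a dict.
def collect_properties(lines):
    props = {}
    prefix = "Setting property "
    for line in lines:
        line = line.strip()
        if line.startswith(prefix):
            # partition on the first ": " so values like URLs survive
            key, _, value = line[len(prefix):].partition(": ")
            props[key] = value
    return props

props = collect_properties([
    "Setting property hw_disk_bus: scsi",
    "Setting property os_distro: cirros",
    "Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img",
])
print(props["hw_disk_bus"])
```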
2026-03-08 01:18:09.308910 | orchestrator | 2026-03-08 01:18:03 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-08 01:18:09.308917 | orchestrator | 2026-03-08 01:18:03 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-08 01:18:09.308924 | orchestrator | 2026-03-08 01:18:03 | INFO  | Setting internal_version = 0.6.3
2026-03-08 01:18:09.308930 | orchestrator | 2026-03-08 01:18:03 | INFO  | Setting image_original_user = cirros
2026-03-08 01:18:09.308936 | orchestrator | 2026-03-08 01:18:03 | INFO  | Adding tag os:cirros
2026-03-08 01:18:09.308943 | orchestrator | 2026-03-08 01:18:04 | INFO  | Setting property architecture: x86_64
2026-03-08 01:18:09.308949 | orchestrator | 2026-03-08 01:18:04 | INFO  | Setting property hw_disk_bus: scsi
2026-03-08 01:18:09.308955 | orchestrator | 2026-03-08 01:18:04 | INFO  | Setting property hw_rng_model: virtio
2026-03-08 01:18:09.308962 | orchestrator | 2026-03-08 01:18:04 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-08 01:18:09.308968 | orchestrator | 2026-03-08 01:18:04 | INFO  | Setting property hw_watchdog_action: reset
2026-03-08 01:18:09.308974 | orchestrator | 2026-03-08 01:18:05 | INFO  | Setting property hypervisor_type: qemu
2026-03-08 01:18:09.308980 | orchestrator | 2026-03-08 01:18:05 | INFO  | Setting property os_distro: cirros
2026-03-08 01:18:09.308986 | orchestrator | 2026-03-08 01:18:05 | INFO  | Setting property os_purpose: minimal
2026-03-08 01:18:09.308993 | orchestrator | 2026-03-08 01:18:05 | INFO  | Setting property replace_frequency: never
2026-03-08 01:18:09.309000 | orchestrator | 2026-03-08 01:18:06 | INFO  | Setting property uuid_validity: none
2026-03-08 01:18:09.309006 | orchestrator | 2026-03-08 01:18:06 | INFO  | Setting property provided_until: none
2026-03-08 01:18:09.309012 | orchestrator | 2026-03-08 01:18:06 | INFO  | Setting property image_description: Cirros
2026-03-08 01:18:09.309018 | orchestrator | 2026-03-08 01:18:06 | INFO  | Setting property image_name: Cirros
2026-03-08 01:18:09.309025 | orchestrator | 2026-03-08 01:18:07 | INFO  | Setting property internal_version: 0.6.3
2026-03-08 01:18:09.309036 | orchestrator | 2026-03-08 01:18:07 | INFO  | Setting property image_original_user: cirros
2026-03-08 01:18:09.309043 | orchestrator | 2026-03-08 01:18:07 | INFO  | Setting property os_version: 0.6.3
2026-03-08 01:18:09.309049 | orchestrator | 2026-03-08 01:18:07 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-08 01:18:09.309056 | orchestrator | 2026-03-08 01:18:07 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-08 01:18:09.309062 | orchestrator | 2026-03-08 01:18:08 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-08 01:18:09.309069 | orchestrator | 2026-03-08 01:18:08 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-08 01:18:09.309075 | orchestrator | 2026-03-08 01:18:08 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-08 01:18:09.665882 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-08 01:18:11.923803 | orchestrator | 2026-03-08 01:18:11 | INFO  | date: 2026-03-07
2026-03-08 01:18:11.923856 | orchestrator | 2026-03-08 01:18:11 | INFO  | image: octavia-amphora-haproxy-2024.2.20260307.qcow2
2026-03-08 01:18:11.923889 | orchestrator | 2026-03-08 01:18:11 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260307.qcow2
2026-03-08 01:18:11.923899 | orchestrator | 2026-03-08 01:18:11 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260307.qcow2.CHECKSUM
2026-03-08 01:18:12.033550 | orchestrator | 2026-03-08 01:18:12 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/work/logs"
2026-03-08 01:18:46.495853 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/work/artifacts"
2026-03-08 01:18:46.795912 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c7f1f43b9c6c488abd8fa06041d5207b/work/docs"
2026-03-08 01:18:46.815279 |
2026-03-08 01:18:46.815447 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-08 01:18:47.808861 | orchestrator | changed: .d..t...... ./
2026-03-08 01:18:47.809394 | orchestrator | changed: All items complete
2026-03-08 01:18:47.809495 |
2026-03-08 01:18:48.510592 | orchestrator | changed: .d..t...... ./
2026-03-08 01:18:49.261572 | orchestrator | changed: .d..t...... ./
2026-03-08 01:18:49.304621 |
2026-03-08 01:18:49.304900 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-08 01:18:49.343404 | orchestrator | skipping: Conditional result was False
2026-03-08 01:18:49.346679 | orchestrator | skipping: Conditional result was False
2026-03-08 01:18:49.369438 |
2026-03-08 01:18:49.369543 | PLAY RECAP
2026-03-08 01:18:49.369609 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-08 01:18:49.369641 |
2026-03-08 01:18:49.535743 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-08 01:18:49.537051 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-08 01:18:50.367539 |
2026-03-08 01:18:50.367706 | PLAY [Base post]
2026-03-08 01:18:50.382895 |
2026-03-08 01:18:50.383053 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-08 01:18:51.447697 | orchestrator | changed
2026-03-08 01:18:51.475134 |
2026-03-08 01:18:51.475303 | PLAY RECAP
2026-03-08 01:18:51.475371 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-08 01:18:51.475438 |
2026-03-08 01:18:51.634022 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-08 01:18:51.635104 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-08 01:18:52.569160 |
2026-03-08 01:18:52.569382 | PLAY [Base post-logs]
2026-03-08 01:18:52.585862 |
2026-03-08 01:18:52.586227 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-08 01:18:53.080607 | localhost | changed
2026-03-08 01:18:53.098344 |
2026-03-08 01:18:53.098532 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-08 01:18:53.125973 | localhost | ok
2026-03-08 01:18:53.130140 |
2026-03-08 01:18:53.130326 | TASK [Set zuul-log-path fact]
2026-03-08 01:18:53.146520 | localhost | ok
2026-03-08 01:18:53.157375 |
2026-03-08 01:18:53.157501 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-08 01:18:53.185698 | localhost | ok
2026-03-08 01:18:53.194210 |
2026-03-08 01:18:53.194438 | TASK [upload-logs : Create log directories]
2026-03-08 01:18:53.734252 | localhost | changed
2026-03-08 01:18:53.739284 |
2026-03-08 01:18:53.739456 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-08 01:18:54.282238 | localhost -> localhost | ok: Runtime: 0:00:00.007252
2026-03-08 01:18:54.286393 |
2026-03-08 01:18:54.286517 | TASK [upload-logs : Upload logs to log server]
2026-03-08 01:18:54.848415 | localhost | Output suppressed because no_log was given
2026-03-08 01:18:54.850615 |
2026-03-08 01:18:54.850731 | LOOP [upload-logs : Compress console log and json output]
2026-03-08 01:18:54.900953 | localhost | skipping: Conditional result was False
2026-03-08 01:18:54.906314 | localhost | skipping: Conditional result was False
2026-03-08 01:18:54.921379 |
2026-03-08 01:18:54.921644 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-08 01:18:54.972487 | localhost | skipping: Conditional result was False
2026-03-08 01:18:54.973146 |
2026-03-08 01:18:54.976440 | localhost | skipping: Conditional result was False
2026-03-08 01:18:54.983762 |
2026-03-08 01:18:54.983991 | LOOP [upload-logs : Upload console log and json output]